| query_id (string, length 32) | query (string, length 6-3.9k) | positive_passages (list, length 1-21) | negative_passages (list, length 10-100) | subset (string, 7 classes) |
|---|---|---|---|---|
93c2400e0f67a02dd7af203e52ba3fa8
|
HFT-CNN: Learning Hierarchical Category Structure for Multi-label Short Text Categorization
|
[
{
"docid": "a3866467e9a5a1ee2e35b9f2e477a3e3",
"text": "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.",
"title": ""
},
{
"docid": "78f8d28f4b20abbac3ad848033bb088b",
"text": "Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both treeand DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. The proposed method consistently outperforms the state-of-the-art method on both treeand DAG-structured hierarchies.",
"title": ""
},
{
"docid": "3a58c1a2e4428c0b875e1202055e5b13",
"text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "8efedc482ad1a0a08c2f588e5d4e9672",
"text": "Understanding the rapidly growing short text is very important. Short text is different from traditional documents in its shortness and sparsity, which hinders the application of conventional machine learning and text mining algorithms. Two major approaches have been exploited to enrich the representation of short text. One is to fetch contextual information of a short text to directly add more text; the other is to derive latent topics from existing large corpus, which are used as features to enrich the representation of short text. The latter approach is elegant and efficient in most cases. The major trend along this direction is to derive latent topics of certain granularity through well-known topic models such as latent Dirichlet allocation (LDA). However, topics of certain granularity are usually not sufficient to set up effective feature spaces. In this paper, we move forward along this direction by proposing an method to leverage topics at multiple granularity, which can model the short text more precisely. Taking short text classification as an example, we compared our proposed method with the state-of-the-art baseline over one open data set. Our method reduced the classification error by 20.25 % and 16.68 % respectively on two classifiers.",
"title": ""
},
{
"docid": "a9d0b367d4507bbcee55f4f25071f12e",
"text": "The goal of sentence and document modeling is to accurately represent the meaning of sentences and documents for various Natural Language Processing tasks. In this work, we present Dependency Sensitive Convolutional Neural Networks (DSCNN) as a generalpurpose classification system for both sentences and documents. DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long ShortTerm Memory networks and subsequently extracting features with convolution operators. Compared with existing recursive neural models with tree structures, DSCNN does not rely on parsers and expensive phrase labeling, and thus is not restricted to sentencelevel tasks. Moreover, unlike other CNNbased models that analyze sentences locally by sliding windows, our system captures both the dependency information within each sentence and relationships across sentences in the same document. Experiment results demonstrate that our approach is achieving state-ofthe-art performance on several tasks, including sentiment analysis, question type classification, and subjectivity classification.",
"title": ""
}
] |
[
{
"docid": "7d74a130f423bb86b692bd17d21f2271",
"text": "Traffic optimizations (TO, e.g. flow scheduling, load balancing) in datacenters are difficult online decision-making problems. Previously, they are done with heuristics relying on operators' understanding of the workload and environment. Designing and implementing proper TO algorithms thus take at least weeks. Encouraged by recent successes in applying deep reinforcement learning (DRL) techniques to solve complex online control problems, we study if DRL can be used for automatic TO without human-intervention. However, our experiments show that the latency of current DRL systems cannot handle flow-level TO at the scale of current datacenters, because short flows (which constitute the majority of traffic) are usually gone before decisions can be made.\n Leveraging the long-tail distribution of datacenter traffic, we develop a two-level DRL system, AuTO, mimicking the Peripheral & Central Nervous Systems in animals, to solve the scalability problem. Peripheral Systems (PS) reside on end-hosts, collect flow information, and make TO decisions locally with minimal delay for short flows. PS's decisions are informed by a Central System (CS), where global traffic information is aggregated and processed. CS further makes individual TO decisions for long flows. With CS&PS, AuTO is an end-to-end automatic TO system that can collect network information, learn from past decisions, and perform actions to achieve operator-defined goals. We implement AuTO with popular machine learning frameworks and commodity servers, and deploy it on a 32-server testbed. Compared to existing approaches, AuTO reduces the TO turn-around time from weeks to ~100 milliseconds while achieving superior performance. For example, it demonstrates up to 48.14% reduction in average flow completion time (FCT) over existing solutions.",
"title": ""
},
{
"docid": "c30c8c1a8f484d82b718ba4d510af4d5",
"text": "Reputation management plays a key role in improving brand awareness and mastering the impact of harmful events. Aware of its importance, companies invest a lot in acquiring powerful social media monitoring tools (SMM). Several SMM tools are available today in the market, they help marketers and firms' managers follow relevant insights that inform them about what people think about their brand, who speaks about it, when and how it occurs. For this, SMM tools use several metrics concerning messages sent in social media about a brand, but few give a scoring to that brand's reputation. Our contribution through this work is to introduce the reputation score measured by our Intelligent Reputation Measuring System (IRMS) and compare it with existing SMM tools metrics.",
"title": ""
},
{
"docid": "f193816262da8f4edb523e172a83f953",
"text": "The European FF POIROT project (IST-2001-38248) aims at developing applications for tackling financial fraud, using formal ontological repositories as well as multilingual terminological resources. In this article, we want to focus on the development cycle towards an application recognizing several types of e-mail fraud, such as phishing, Nigerian advance fee fraud and lottery scam. The development cycle covers four tracks of development - language engineering, terminology engineering, knowledge engineering and system engineering. These development tracks are preceded by a problem determination phase and followed by a deployment phase. Each development track is supported by a methodology. All methodologies and phases in the development cycle will be discussed in detail",
"title": ""
},
{
"docid": "0b8311e2f33a44e965cabe06e85cc799",
"text": "This article investigates cutting planes for mixed-integer disjunctive programs. In the early 1980s, Balas and Jeroslow presented monoidal disjunctive cuts exploiting the integrality of variables. For disjunctions arising from binary variables, it is known that these cutting planes are essentially the same as Gomory mixed-integer and mixed-integer rounding cuts. In this article, we investigate the relation of monoidal cut strengthening to other classes of cutting planes for general twoterm disjunctions. In this context, we introduce a generalization of mixed-integer rounding cuts. We also demonstrate the effectiveness of monoidal disjunctive cuts via computational experiments on instances involving complementarity constraints.",
"title": ""
},
{
"docid": "9624ce8061b8476d7fe8d61ef3b565b8",
"text": "The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.",
"title": ""
},
{
"docid": "fc036d58e966b72fc9f0c9a4c156b5a7",
"text": "OBJECTIVE\nWe sought to estimate the prevalence of pelvic organ prolapse in older women using the Pelvic Organ Prolapse Quantification examination and to identify factors associated with prolapse.\n\n\nMETHODS\nWomen with a uterus enrolled at one site of the Women's Health Initiative Hormone Replacement Therapy randomized clinical trial were eligible for this ancillary cross-sectional study. Subjects underwent a Pelvic Organ Prolapse Quantification examination during a maximal Valsalva maneuver and in addition completed a questionnaire. Logistic regression was used to identify independent risk factors for each of 2 definitions of prolapse: 1) Pelvic Organ Prolapse Quantification stage II or greater and 2) the leading edge of prolapse measured at the hymen or below.\n\n\nRESULTS\nIn 270 participants, age (mean +/- SD) was 68.3 +/- 5.6 years, body mass index was 30.4 +/- 6.2 kg/m(2), and vaginal parity (median [range]) was 3 (0-12). The proportions of Pelvic Organ Prolapse Quantification stages (95% confidence intervals [CIs]) were stage 0, 2.3% (95% CI 0.8-4.8%); stage I, 33.0% (95% CI 27.4-39.0%); stage II, 62.9% (95% CI 56.8-68.7%); and stage III, 1.9% (95% CI 0.6-4.3%). In 25.2% (95% CI 20.1-30.8%), the leading edge of prolapse was at the hymen or below. Hormone therapy was not associated with prolapse (P =.9). On multivariable analysis, less education (odds ratio [OR] 2.16, 95% CI 1.10-4.24) and higher vaginal parity (OR 1.61, 95% CI 1.03-2.50) were associated with prolapse when defined as stage II or greater. For prolapse defined by the leading edge at or below the hymen, older age had a decreased risk (OR 0.50, 95% CI 0.27-0.92) and less education, and larger babies had an increased risk (OR 2.38, 95% CI 1.31-4.32 and OR 1.97, 95% CI 1.07-3.64, respectively).\n\n\nCONCLUSION\nSome degree of prolapse is nearly ubiquitous in older women, which should be considered in the development of clinically relevant definitions of prolapse. Risk factors for prolapse differed depending on the definition of prolapse used.",
"title": ""
},
{
"docid": "4e7122172cb7c37416381c251b510948",
"text": "Anatomic and physiologic data are used to analyze the energy expenditure on different components of excitatory signaling in the grey matter of rodent brain. Action potentials and postsynaptic effects of glutamate are predicted to consume much of the energy (47% and 34%, respectively), with the resting potential consuming a smaller amount (13%), and glutamate recycling using only 3%. Energy usage depends strongly on action potential rate--an increase in activity of 1 action potential/cortical neuron/s will raise oxygen consumption by 145 mL/100 g grey matter/h. The energy expended on signaling is a large fraction of the total energy used by the brain; this favors the use of energy efficient neural codes and wiring patterns. Our estimates of energy usage predict the use of distributed codes, with <or=15% of neurons simultaneously active, to reduce energy consumption and allow greater computing power from a fixed number of neurons. Functional magnetic resonance imaging signals are likely to be dominated by changes in energy usage associated with synaptic currents and action potential propagation.",
"title": ""
},
{
"docid": "6be6e28cf4a4a044122901fad0d2bf40",
"text": "ÐAutomatic transformation of paper documents into electronic documents requires geometric document layout analysis at the first stage. However, variations in character font sizes, text line spacing, and document layout structures have made it difficult to design a general-purpose document layout analysis algorithm for many years. The use of some parameters has therefore been unavoidable in previous methods. In this paper, we propose a parameter-free method for segmenting the document images into maximal homogeneous regions and identifying them as texts, images, tables, and ruling lines. A pyramidal quadtree structure is constructed for multiscale analysis and a periodicity measure is suggested to find a periodical attribute of text regions for page segmentation. To obtain robust page segmentation results, a confirmation procedure using texture analysis is applied to only ambiguous regions. Based on the proposed periodicity measure, multiscale analysis, and confirmation procedure, we could develop a robust method for geometric document layout analysis independent of character font sizes, text line spacing, and document layout structures. The proposed method was experimented with the document database from the University of Washington and the MediaTeam Document Database. The results of these tests have shown that the proposed method provides more accurate results than the previous ones. Index TermsÐGeometric document layout analysis, parameter-free method, periodicity estimation, multiscale analysis, page segmentation.",
"title": ""
},
{
"docid": "33b2c5abe122a66b73840506aa3b443e",
"text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.",
"title": ""
},
{
"docid": "61495d608f36a1e7937d322f7cb9dac6",
"text": "OBJECTIVE\nWe have developed an asynchronous brain-machine interface (BMI)-based lower limb exoskeleton control system based on steady-state visual evoked potentials (SSVEPs).\n\n\nAPPROACH\nBy decoding electroencephalography signals in real-time, users are able to walk forward, turn right, turn left, sit, and stand while wearing the exoskeleton. SSVEP stimulation is implemented with a visual stimulation unit, consisting of five light emitting diodes fixed to the exoskeleton. A canonical correlation analysis (CCA) method for the extraction of frequency information associated with the SSVEP was used in combination with k-nearest neighbors.\n\n\nMAIN RESULTS\nOverall, 11 healthy subjects participated in the experiment to evaluate performance. To achieve the best classification, CCA was first calibrated in an offline experiment. In the subsequent online experiment, our results exhibit accuracies of 91.3 ± 5.73%, a response time of 3.28 ± 1.82 s, an information transfer rate of 32.9 ± 9.13 bits/min, and a completion time of 1100 ± 154.92 s for the experimental parcour studied.\n\n\nSIGNIFICANCE\nThe ability to achieve such high quality BMI control indicates that an SSVEP-based lower limb exoskeleton for gait assistance is becoming feasible.",
"title": ""
},
{
"docid": "5c8242eabf1df5fb6c61f490dd2e3e5d",
"text": "In recent years, the capabilities and roles of Unmanned Aerial Vehicles (UAVs) have rapidly evolved, and their usage in military and civilian areas is extremely popular as a result of the advances in technology of robotic systems such as processors, sensors, communications, and networking technologies. While this technology is progressing, development and maintenance costs of UAVs are decreasing relatively. The focus is changing from use of one large UAV to use of multiple UAVs, which are integrated into teams that can coordinate to achieve high-level goals. This level of coordination requires new networking models that can be set up on highly mobile nodes such as UAVs in the fleet. Such networking models allow any two nodes to communicate directly if they are in the communication range, or indirectly through a number of relay nodes such as UAVs. Setting up an ad-hoc network between flying UAVs is a challenging issue, and requirements can differ from traditional networks, Mobile Ad-hoc Networks (MANETs) and Vehicular Ad-hoc Networks (VANETs) in terms of node mobility, connectivity, message routing, service quality, application areas, etc. This paper O. K. Sahingoz (B) Computer Engineering Department, Turkish Air Force Academy, Yesilyurt, Istanbul, 34149, Turkey e-mail: [email protected] identifies the challenges with using UAVs as relay nodes in an ad-hoc manner, introduces network models of UAVs, and depicts open research issues with analyzing opportunities and future work.",
"title": ""
},
{
"docid": "eedcff8c2a499e644d1343b353b2a1b9",
"text": "We consider the problem of finding related tables in a large corpus of heterogenous tables. Detecting related tables provides users a powerful tool for enhancing their tables with additional data and enables effective reuse of available public data. Our first contribution is a framework that captures several types of relatedness, including tables that are candidates for joins and tables that are candidates for union. Our second contribution is a set of algorithms for detecting related tables that can be either unioned or joined. We describe a set of experiments that demonstrate that our algorithms produce highly related tables. We also show that we can often improve the results of table search by pulling up tables that are ranked much lower based on their relatedness to top-ranked tables. Finally, we describe how to scale up our algorithms and show the results of running it on a corpus of over a million tables extracted from Wikipedia.",
"title": ""
},
{
"docid": "6c8b83e0e02e5c0230d57e4885d27e02",
"text": "Contemporary conceptions of physical education pedagogy stress the importance of considering students’ physical, affective, and cognitive developmental states in developing curricula (Aschebrock, 1999; Crum, 1994; Grineski, 1996; Humel, 2000; Hummel & Balz, 1995; Jones & Ward, 1998; Kurz, 1995; Siedentop, 1996; Virgilio, 2000). Sport and physical activity preference is one variable that is likely to change with development. Including activities preferred by girls and boys in physical education curricula could produce several benefits, including greater involvement in lessons and increased enjoyment of physical education (Derner, 1994; Greenwood, Stillwell, & Byars, 2000; Knitt et al., 2000; Lee, Fredenburg, Belcher, & Cleveland, 1999; Sass H. & Sass I., 1986; Strand & Scatling, 1994; Volke, Poszony, & Stumpf, 1985). These are significant goals, because preference for physical activity and enjoyment of physical education are important predictors for overall physical activity participation (Sallis et al., 1999a, b). Although physical education curricula should be based on more than simply students’ preferences, student preferences can inform the design of physical education, other schoolbased physical activity programs, and programs sponsored by other agencies. Young people’s physical activity and sport preferences are likely to vary by age, sex, socio-economic status and nationality. Although several studies have been conducted over many years (Greller & Cochran, 1995; Hoffman & Harris, 2000; Kotonski-Immig, 1994; Lamprecht, Ruschetti, & Stamm, 1991; Strand & Scatling, 1994; Taks, Renson, & Vanreusel, 1991; Telama, 1978; Walton et al., 1999), current understanding of children’s preferences in specific sports and movement activities is limited. One of the main limitations is the cross-sectional nature of the data, so the stability of sport and physical activity preferences over time is not known. The main aim of the present research is to describe the levels and trends in the development of sport and physical activity preferences in girls and boys over a period of five years, from the age of 10 to 14. Further, the study aims to establish the stability of preferences over time.",
"title": ""
},
{
"docid": "a70fce38ca9f0ce79d84f6154b0cb0d3",
"text": "Vehicular Ad Hoc Network (VANET) has been drawing interest among the researchers for the past couple of years. Though ad hoc network or mobile ad hoc network is very common in military environment, the real world practice of ad hoc network is still very low. On the other hand, cloud computing is supposed to be the next big thing because of its scalability, PaaS, IaaS, SaaS and other important characteristics. In this paper we have tried to propose a model of ad hoc cloud network architecture. We have specially focused on vehicular ad hoc network architecture or VANET which will enable us to create a “cloud on the run” model. The major parts of this proposed model are wireless devices mounted on vehicles which will act as a mobile multihop network and a public or private cloud created by the vehicles called vehicular cloud.",
"title": ""
},
{
"docid": "54ef290e7c8fbc5c1bcd459df9bc4a06",
"text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.",
"title": ""
},
{
"docid": "1b7b64bd6c51a2a81c112a43ff10bb86",
"text": "We propose techniques for decentralizing prediction markets and order books, utilizing Bitcoin’s security model and consensus mechanism. Decentralization of prediction markets offers several key advantages over a centralized market: no single entity governs over the market, all transactions are transparent in the block chain, and anybody can participate pseudonymously to either open a new market or place bets in an existing one. We provide trust agility: each market has its own specified arbiter and users can choose to interact in markets that rely on the arbiters they trust. We also provide a transparent, decentralized order book that enables order execution on the block chain in the presence of potentially malicious miners. 1 Introductory Remarks Bitcoin has demonstrated that achieving consensus in a decentralized network is practical. This has stimulated research on applying Bitcoin-esque consensus mechanisms to new applications (e.g., DNS through Namecoin, timestamping through CommitCoin [10], and smart contracts through Ethereum). In this paper, we consider application of Bitcoin’s principles to prediction markets. A prediction market (PM) enables forecasts about uncertain future events to be forged into financial instruments that can be traded (bought, sold, shorted, etc.) until the uncertainty of the event is resolved. In several common forecasting scenarios, PMs have demonstrated lower error than polls, expert opinions, and statistical inference [2]. Thus an open and transparent PM not only serves its traders, it serves any stakeholder in the outcome by providing useful forecasting information through prices. Whenever discussing the application of Bitcoin to a new technology or service, its important to distinguish exactly what is meant. For example, a “Bitcoin-based prediction market” could mean at least three different things: (1) adding Bitcoin-related contracts (e.g., the future Bitcoin/USD exchange rate) to a traditional centralized PM, (2) converting the underlying currency of a centralized prediction market to Bitcoin, or (3) applying the design principles of Bitcoin to decentralize the functionality and governance of a PM. Of the three interpretations, approach (1) is not a research contribution. Approach (2) inherits most of the properties of a traditional PM: Opening markets for new future events is subject to a commitment by the PM host to determine the outcome, virtually any trading rules can be implemented, and trade settlement and clearing can be automated if money is held in trading accounts. In addition, by denominating the PM in Bitcoin, approach (2) enables easy electronic deposits and withdrawals from trading accounts, and can add a level of anonymity. An example of approach (2) is Predictious. This set of properties is a desirable starting point but we see several ways it can be improved through approach (3). Thus, our contribution is a novel PM design that enables: • A Decentralized Clearing/Settlement Service. Fully automated settlement and clearing of trades without escrowing funds to a trusted straight through processor (STP). • A Decentralized Order Matching Service. Fully automated matching of orders in a built-in call market, plus full support for external centralized exchanges. 4 http://namecoin.info 5 http://www.ethereum.org 6 https://www.predictious.com • Self-Organized Markets. Any participant can solicit forecasts on any event by arranging for any entity (or group of entities) to arbitrate the final payout based on the event’s outcome. 
• Agile Arbitration. Anyone can serve as an arbiter, and arbiters only need to sign two transactions (an agreement to serve and a declaration of an outcome) keeping the barrier to entry low. Traders can choose to participate in markets with arbiters they trust. Our analogue of Bitcoin miners can also arbitrate. • Transparency by Design. All trades, open markets, and arbitrated outcomes are reflected in a public ledger akin to Bitcoin’s block chain. • Flexible Fees. Fees paid to various parties can be configured on a per-market basis, with levels driven by market conditions (e.g., the minimum to incentivize correct behavior). • Resilience. Disruption to sets of participants will not disrupt the operations of the PM. • Theft Resistance. Like Bitcoin, currency and PM shares are held by the traders, and no transfers are possible without the holder’s digital signature. However like Bitcoin, users must protect their private keys and understand the risks of keeping money on an exchange service. • Pseudonymous Transactions. Like Bitcoin, holders of funds and shares are only identified with a pseudonymous public key, and any individual can hold an arbitrary number of keys. 2 Preliminaries and Related Work 2.1 Prediction Markets A PM enables participants to trade financial shares tied to the outcome of a specified future event. For example, if Alice, Bob, and Charlie are running for president, a share in ‘the outcome of Alice winning’ might entitle its holder to $1 if Alice wins and $0 if she does not. If the participants believed Alice to have a 60% chance of winning, the share would have an expected value of $0.60. In the opposite direction, if Bob and Charlie are trading at $0.30 and $0.10 respectively, the market on aggregate believes their likelihood of winning to be 30% and 10%. One of the most useful innovations of PMs is the intuitiveness of this pricing function [24]. Amateur traders and market observers can quickly assess current market belief, as well as monitor how forecasts change over time. The economic literature provides evidence that PMs can forecast certain types of events more accurately than methods that do not use financial incentives, such as polls (see [2] for an authoritative summary). They have been deployed internally by organizations such as the US Department of Defense, Microsoft, Google, IBM, and Intel, to forecast things like national security threats, natural disasters, and product development time and cost [2]. The literature on PMs tends to focus on topics orthogonal to how PMs are technically deployed, such as market scoring rules for market makers [13,9], accuracy of forecasts [23], and the relationship between share price and market belief [24]. Concurrently with the review of our paper, a decentralized PM called Truthcoin was independently proposed. It is also a Bitcoin-based design, however it focuses on determining a voting mechanism that incentivizes currency holders to judge the outcome of all events. We argue for designated arbitration in Section 5.1. Additionally, our approach does not use a market maker and is based on asset trading through a decentralized order book.",
"title": ""
},
{
"docid": "19bc37be7a2a128c70f2b0556844c0d7",
"text": "Automatic music transcription (AMT) aims to infer a latent symbolic representation of a piece of music (piano-roll), given a corresponding observed audio recording. Transcribing polyphonic music (when multiple notes are played simultaneously) is a challenging problem, due to highly structured overlapping between harmonics. We study whether the introduction of physically inspired Gaussian process (GP) priors into audio content analysis models improves the extraction of patterns required for AMT. Audio signals are described as a linear combination of sources. Each source is decomposed into the product of an amplitude-envelope, and a quasi-periodic component process. We introduce the Matérn spectral mixture (MSM) kernel for describing frequency content of singles notes. We consider two different regression approaches. In the sigmoid model every pitch-activation is independently non-linear transformed. In the softmax model several activation GPs are jointly non-linearly transformed. This introduce crosscorrelation between activations. We use variational Bayes for approximate inference. We empirically evaluate how these models work in practice transcribing polyphonic music. We demonstrate that rather than encourage dependency between activations, what is relevant for improving pitch detection is to learnt priors that fit the frequency content of the sound events to detect. Python code complementing this paper is available at https://github.com/PabloAlvarado/MSMK.",
"title": ""
},
{
"docid": "b7bf40c61ff4c73a8bbd5096902ae534",
"text": "—In therapeutic and functional applications transcutaneous electrical stimulation (TES) is still the most frequently applied technique for muscle and nerve activation despite the huge efforts made to improve implantable technologies. Stimulation electrodes play the important role in interfacing the tissue with the stimulation unit. Between the electrode and the excitable tissue there are a number of obstacles in form of tissue resistivities and permittivities that can only be circumvented by magnetic fields but not by electric fields and currents. However, the generation of magnetic fields needed for the activation of excitable tissues in the human body requires large and bulky equipment. TES devices on the other hand can be built cheap, small and light weight. The weak part in TES is the electrode that cannot be brought close enough to the excitable tissue and has to fulfill a number of requirements to be able to act as efficient as possible. The present review article summarizes the most important factors that influence efficient TES, presents and discusses currently used electrode materials, designs and configurations, and points out findings that have been obtained through modeling, simulation and testing.",
"title": ""
},
{
"docid": "5bde20f5c0cad9bf14bec276b59c9054",
"text": "Energy conversion of sunlight by photosynthetic organisms has changed Earth and life on it. Photosynthesis arose early in Earth's history, and the earliest forms of photosynthetic life were almost certainly anoxygenic (non-oxygen evolving). The invention of oxygenic photosynthesis and the subsequent rise of atmospheric oxygen approximately 2.4 billion years ago revolutionized the energetic and enzymatic fundamentals of life. The repercussions of this revolution are manifested in novel biosynthetic pathways of photosynthetic cofactors and the modification of electron carriers, pigments, and existing and alternative modes of photosynthetic carbon fixation. The evolutionary history of photosynthetic organisms is further complicated by lateral gene transfer that involved photosynthetic components as well as by endosymbiotic events. An expanding wealth of genetic information, together with biochemical, biophysical, and physiological data, reveals a mosaic of photosynthetic features. In combination, these data provide an increasingly robust framework to formulate and evaluate hypotheses concerning the origin and evolution of photosynthesis.",
"title": ""
},
{
"docid": "2acdc7dfe5ae0996ef0234ec51a34fe5",
"text": "The on-line or automatic visual inspection of PCB is basically a very first examination before its electronic testing. This inspection consists of mainly missing or wrongly placed components in the PCB. If there is any missing electronic component then it is not so damaging the PCB. But if any of the component that can be placed only in one way and has been soldered in other way around, then the same will be damaged and there are chances that other components may also get damaged. To avoid this, an automatic visual inspection is in demand that may take care of the missing or wrongly placed electronic components. In the presented paper work, an automatic machine vision system for inspection of PCBs for any missing component as compared with the standard one has been proposed. The system primarily consists of two parts: 1) the learning process, where the system is trained for the standard PCB, and 2) inspection process where the PCB under test is inspected for any missing component as compared with the standard one. The proposed system can be deployed on a manufacturing line with a much more affordable price comparing to other commercial inspection systems.",
"title": ""
}
] |
scidocsrr
|
78c22194de4fc9bb39399f4c9acfb9df
|
Identifying lexical relationships and entailments with distributional semantics
|
[
{
"docid": "a5b7253f56a487552ba3b0ce15332dd1",
"text": "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as BornInCitypa, bq ^ CityInCountrypb, cq ùñ Nationalitypa, cq. We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-ofthe-art confidence-based rule mining approach in mining horn rules that involve compositional reasoning.",
"title": ""
}
] |
[
{
"docid": "739669a06f0fbe94f5c21e1b0b514345",
"text": "This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.",
"title": ""
},
{
"docid": "baf576065d4d32faa8d2a05c14975ca7",
"text": "These studies tested the associations between responses to an induced imaginary romantic rejection and individual differences on dimensions of attachment and covert narcissism. In Study 1 (N=125), we examined the associations between attachment dimensions and emotional responses to a vignette depicting a scenario of romantic rejection, as measured by self-reported negative mood states, expressions of anger, somatic symptoms, and self-evaluation. Higher scores on attachment anxiety, but not on attachment avoidance, were associated with stronger reactions to the induced rejection. Moreover, decreased self-evaluation scores (self-esteem and pride) were found to mediate these associations. In Study 2 (N=88), the relative contributions of covert narcissism and attachment anxiety to the emotional responses to romantic rejection were explored. Higher scores on covert narcissism were associated with stronger reactions to the induced rejection. Moreover, covert narcissism seemed to constitute a specific aspect of attachment anxiety.",
"title": ""
},
{
"docid": "a4b57037235e306034211e07e8500399",
"text": "As wireless devices boom and bandwidth-hungry applications (e.g., video and cloud uploading) get popular, today's wireless local area networks (WLANs) become not only crowded but also stressed at throughput. Multiuser multiple-input-multiple-output (MU-MIMO), an advanced form of MIMO, has gained attention due to its huge potential in improving the performance of WLANs. This paper surveys random access-based medium access control (MAC) protocols for MU-MIMO-enabled WLANs. It first provides background information about the evolution and the fundamental MAC schemes of IEEE 802.11 Standards and Amendments, and then identifies the key requirements of designing MU-MIMO MAC protocols for WLANs. After this, the most representative MU-MIMO MAC proposals in the literature are overviewed by benchmarking their MAC procedures and examining the key components, such as the channel state information acquisition, decoding/precoding, and scheduling schemes. Classifications and discussions on important findings of the surveyed MAC protocols are provided, based on which, the research challenges for designing effective MU-MIMO MAC protocols, as well as the envisaged MAC's role in the future heterogeneous networks, are highlighted.",
"title": ""
},
{
"docid": "6a8b76ae69dbbe42c7ea5f533c813d63",
"text": "Requests for recommendation can be seen as a form of query for candidate items, ranked by relevance. Users are however oen unable to crisply dene what they are looking for. One of the core concepts of natural communication for describing and explaining complex information needs in an intuitive fashion are analogies: e.g., “What is to Christopher Nolan as is 2001: A Space Odyssey to Stanley Kubrick?”. Analogies allow users to explore the item space by formulating queries in terms of items rather than explicitly specifying the properties that they nd aractive. One of the core challenges which hamper research on analogy-enabled queries is that analogy semantics rely on consensus on human perception, which is not well represented in current benchmark data sets. erefore, in this paper we introduce a new benchmark dataset focusing on the human aspects for analogy semantics. Furthermore, we evaluate a popular technique for analogy semantics (word2vec neuronal embeddings) using our dataset. e results show that current word embedding approaches are still not not suitable to suciently deal with deeper analogy semantics. We discuss future directions including hybrid algorithms also incorporating structural or crowd-based approaches, and the potential for analogy-based explanations.",
"title": ""
},
{
"docid": "64d53035eb919d5e27daef6b666b7298",
"text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.",
"title": ""
},
{
"docid": "afb01956f94be72b9424219ada7a5890",
"text": "The ability to remember one's past depends on neural processing set in motion at the moment each event is experienced. Memory formation can be observed by segregating neural responses according to whether or not each event is recalled or recognized on a subsequent memory test. Subsequent memory analyses have been performed with various neural measures, including brain potentials extracted from intracranial and extracranial electroencephalographic recordings, and hemodynamic responses from functional magnetic resonance imaging. Neural responses can predict which events, and which aspects of those events, will be subsequently remembered or forgotten, thereby elucidating the neurocognitive processes that establish durable episodic memories.",
"title": ""
},
{
"docid": "6b14bd3e01eb3f4abfc6d7456cf7fd47",
"text": "Fermented foods and beverages were among the first processed food products consumed by humans. The production of foods such as yogurt and cultured milk, wine and beer, sauerkraut and kimchi, and fermented sausage were initially valued because of their improved shelf life, safety, and organoleptic properties. It is increasingly understood that fermented foods can also have enhanced nutritional and functional properties due to transformation of substrates and formation of bioactive or bioavailable end-products. Many fermented foods also contain living microorganisms of which some are genetically similar to strains used as probiotics. Although only a limited number of clinical studies on fermented foods have been performed, there is evidence that these foods provide health benefits well-beyond the starting food materials.",
"title": ""
},
{
"docid": "435da20d6285a8b57a35fb407b96c802",
"text": "This paper attempts to review examples of the use of storytelling and narrative in immersive virtual reality worlds. Particular attention is given to the way narrative is incorporated in artistic, cultural, and educational applications through the development of specific sensory and perceptual experiences that are based on characteristics inherent to virtual reality, such as immersion, interactivity, representation, and illusion. Narrative development is considered on three axes: form (visual representation), story (emotional involvement), and history (authenticated cultural content) and how these can come together.",
"title": ""
},
{
"docid": "15b05bdc1310d038110b545686082c98",
"text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. The science and technology research of such networks are reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.",
"title": ""
},
{
"docid": "dcf87ddb6c4c79313174dc4ce02550e2",
"text": "BACKGROUND\nMany factors are considered predictors of recurrence after hallux valgus (HV) surgery, including preoperative distal metatarsal articular angle (DMAA). The restoration of the bone and joint alignment would be more important than realigning the cartilaginous surface of the metatarsal head. Therefore, is DMAA correction essential for a good clinical and radiological results after HV surgery? This study aims to illustrate the results of percutaneous forefoot surgery (PFS) for correction of HV deformity without DMAA correction.\n\n\nMATERIAL AND METHODS\nA prospective single-center study of 74 patients (89 feet), with mild-to-moderate hallux valgus deformity, who underwent PFS. The mean latest follow-up was 57.3 months.\n\n\nRESULTS\nPreoperative median visual analog scale was 7 points and AOFAS scores were 52 points. At the mean latest follow up both scores improved to 0 points and 90 points, respectively. Median HV angle and intermetatarsal angle changed from 30° and 12° preoperatively, to 21° and 11° at mean latest follow-up. Overall, 80% of the patients were satisfied or very satisfied. Recurrence of medial first metatarsal head pain occurred in 12 cases (13.5%).\n\n\nCONCLUSIONS\nPFS, without DMAA correction, is a valid procedure for surgical correction in patients with HV, despite the slightly worse radiographic results in our study.\n\n\nLEVELS OF EVIDENCE\nLevel II: Prospective study.",
"title": ""
},
{
"docid": "641754ee9332e1032838d0dba7712607",
"text": "Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, new medical technology and numerous administration policies and procedures. Adverse events initiated by medication error are a crucial area to improve patient safety. This project looked at the complexity of the medication administration process at a regional hospital and the effect of two medication distribution systems. A reduction in work complexity and time spent gathering medication and supplies, was a goal of this work; but more importantly was determining what barriers to safety and efficiency exist in the medication administration process and the impact of barcode scanning and other technologies. The concept of mobile medication units is attractive to both managers and clinicians; however it is only one solution to the problems with medication administration. Introduction and Background Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, and the numerous policies and procedures created for their administration. Mayo and Duncan (2004) found that a “single [hospital] patient can receive up to 18 medications per day, and a nurse can administer as many as 50 medications per shift” (p. 209). While some researchers indicated that the solution is more nurse education or training (e.g. see Mayo & Duncan, 2004; and Tang, Sheu, Yu, Wei, & Chen, 2007), it does not appear that they have determined the feasibility of this solution and the increased time necessary to look up every unfamiliar medication. Most of the research which focuses on the causes of medication errors does not examine the processes involved in the administration of the medication. And yet, understanding the complexity in the nurses’ processes and workflow is necessary to develop safeguards and create more robust systems that reduce the probability of errors and adverse events. Current medication administration processes include many \\ tasks, including but not limited to, assessing the patient to obtain pertinent data, gathering medications, confirming the five rights (right dose, patient, route, medication, and time), administering the medications, documenting administration, and observing for therapeutic and untoward effects. In studies of the delivery of nursing care in acute care settings, Potter et al. (2005) found that nurses spent 16% their time preparing or administering medication. In addition to the amount of time that the nurses spent in preparing and administering medication, Potter et al found that a significant number of interruptions occurred during this critical process. Interruptions impact the cognitive workload of the nurse, and create an environment where medication errors are more likely to occur. A second environmental factor that affects the nurses’ workflow, is the distance traveled to administer care during a shift. Welker, Decker, Adam, & Zone-Smith (2006) found that on average, ward nurses who were assigned three patients walked just over 4.1 miles per shift while a nurse assigned to six patients walked over 4.8 miles. 
As a large number of interruptions (22%) occurred within the medication rooms, which were highly visible and in high traffic locations (Potter et al., 2005), and while collecting supplies or traveling to and from patient rooms (Ebright, Patterson, Chalko, & Render, 2003), reducing the distances and frequency of repeated travel could have the ability to decrease the number of interruptions and possibly errors in medication administration. Adding new technology, revising policies and procedures, and providing more education have often been the approaches taken to reduce medication errors. Unfortunately these new technologies, such as computerized order entry and electronic medical records / charting, and new procedures, for instance bar code scanning both the medicine and the patient, can add complexity to the nurse’s taskload. The added complexity in correspondence with the additional time necessary to complete the additional steps can lead to workarounds and variations in care. Given the problems in the current medication administration processes, this work focused on facilitating the nurse’s role in the medication administration process. This study expands on the Braswell and Duggar (2006) investigation and compares processes at baseline and postintroduction of a new mobile medication system. To do this, the current medication administration and distribution process was fully documented to determine a baseline in workload complexity. Then a new mobile medication center was installed to allow nurses easier access to patient medications while traveling on the floor, and the medication administration and distribution process was remapped to demonstrate where process complexities were reduced and nurse workflow is more efficient. A similar study showed that the time nurses spend gathering medications and supplies can be dramatically reduced through this type of system (see Braswell & Duggar, 2006); however, they did not directly investigate the impact on the nursing process. Thus, this research is presented to document the impact of this technology on the nursing workflow at a regional hospital, and as an expansion on the work begun by Braswell and Duggar.",
"title": ""
},
{
"docid": "4af5aa24efc82a8e66deb98f224cd033",
"text": "Abstract—In the recent years, the rapid spread of mobile device has create the vast amount of mobile data. However, some shallow-structure models such as support vector machine (SVM) have difficulty dealing with high dimensional data with the development of mobile network. In this paper, we analyze mobile data to predict human trajectories in order to understand human mobility pattern via a deep-structure model called “DeepSpace”. To the best of out knowledge, it is the first time that the deep learning approach is applied to predicting human trajectories. Furthermore, we develop the vanilla convolutional neural network (CNN) to be an online learning system, which can deal with the continuous mobile data stream. In general, “DeepSpace” consists of two different prediction models corresponding to different scales in space (the coarse prediction model and fine prediction models). This two models constitute a hierarchical structure, which enable the whole architecture to be run in parallel. Finally, we test our model based on the data usage detail records (UDRs) from the mobile cellular network in a city of southeastern China, instead of the call detail records (CDRs) which are widely used by others as usual. The experiment results show that “DeepSpace” is promising in human trajectories prediction.",
"title": ""
},
{
"docid": "412951e42529d7862cb0bcbaf5bd9f97",
"text": "Wireless Sensor Network is an emerging field which is accomplishing much importance because of its vast contribution in varieties of applications. Wireless Sensor Networks are used to monitor a given field of interest for changes in the environment. Coverage is one of the main active research interests in WSN.In this paper we aim to review the coverage problem In WSN and the strategies that are used in solving coverage problem in WSN.These strategies studied are used during deployment phase of the network. Besides this we also outlined some basic design considerations in coverage of WSN.We also provide a brief summary of various coverage issues and the various approaches for coverage in Sensor network. Keywords— Coverage; Wireless sensor networks: energy efficiency; sensor; area coverage; target Coverage.",
"title": ""
},
{
"docid": "494b375064fbbe012b382d0ad2db2900",
"text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. So, what does the current evidence say?",
"title": ""
},
{
"docid": "a32956703826761d16bba1a9665b215e",
"text": "Triangle meshes are widely used in representing surfaces in computer vision and computer graphics. Although 2D image processingbased edge detection techniques have been popular in many application areas, they are not well developed for surfaces represented by triangle meshes. This paper proposes a robust edge detection algorithm for triangle meshes and its applications to surface segmentation and adaptive surface smoothing. The proposed edge detection technique is based on eigen analysis of the surface normal vector field in a geodesic window. To compute the edge strength of a certain vertex, the neighboring vertices in a specified geodesic distance are involved. Edge information are used further to segment the surfaces with watershed algorithm and to achieve edgepreserved, adaptive surface smoothing. The proposed algorithm is novel in robustly detecting edges on triangle meshes against noise. The 3D watershed algorithm is an extension from previous work. Experimental results on surfaces reconstructed from multi-view real range images are presented.",
"title": ""
},
{
"docid": "cd9552d9891337f7e58b3e7e36dfab54",
"text": "Multi-variant program execution is an application of n-version programming, in which several slightly different instances of the same program are executed in lockstep on a multiprocessor. These variants are created in such a way that they behave identically under \"normal\" operation and diverge when \"out of specification\" events occur, which may be indicative of attacks. This paper assess the effectiveness of different code variation techniques to address different classes of vulnerabilities. In choosing a variant or combination of variants, security demands need to be balanced against runtime overhead. Our study indicates that a good combination of variations when running two variants is to choose one of instruction set randomization, system call number randomization, and register randomization, and use that together with library entry point randomization. Running more variants simultaneously makes it exponentially more difficult to take over the system.",
"title": ""
},
{
"docid": "a40727cfa31be91e0ed043826f1507d8",
"text": "Deep clustering learns deep feature representations that favor clustering task using neural networks. Some pioneering work proposes to simultaneously learn embedded features and perform clustering by explicitly defining a clustering oriented loss. Though promising performance has been demonstrated in various applications, we observe that a vital ingredient has been overlooked by these work that the defined clustering loss may corrupt feature space, which leads to non-representative meaningless features and this in turn hurts clustering performance. To address this issue, in this paper, we propose the Improved Deep Embedded Clustering (IDEC) algorithm to take care of data structure preservation. Specifically, we manipulate feature space to scatter data points using a clustering loss as guidance. To constrain the manipulation and maintain the local structure of data generating distribution, an under-complete autoencoder is applied. By integrating the clustering loss and autoencoder’s reconstruction loss, IDEC can jointly optimize cluster labels assignment and learn features that are suitable for clustering with local structure preservation. The resultant optimization problem can be effectively solved by mini-batch stochastic gradient descent and backpropagation. Experiments on image and text datasets empirically validate the importance of local structure preservation and the effectiveness of our algorithm.",
"title": ""
},
{
"docid": "2ce7c776cd231117fecdf81f2e8d35a2",
"text": "The use of social media as a source of news is entering a new phase as computer algorithms are developed and deployed to detect, rank, and verify news. The efficacy and ethics of such technology are the subject of this article, which examines the SocialSensor application, a tool developed by a multidisciplinary EU research project. The results suggest that computer software can be used successfully to identify trending news stories, allow journalists to search within a social media corpus, and help verify social media contributors and content. However, such software also raises questions about accountability as social media is algorithmically filtered for use by journalists and others. Our analysis of the inputs SocialSensor relies on shows biases towards those who are vocal and have an audience, many of whom are men in the media. We also reveal some of the technology's temporal and topic preferences. The conclusion discusses whether such biases are necessary for systems like SocialSensor to be effective. The article also suggests that academic research has failed to fully recognise the changes to journalists' sourcing practices brought about by social media, particularly Twitter, and provides some countervailing evidence and an explanation for this failure. Introduction The ubiquity of computing in contemporary culture has resulted in human decision-making being augmented, and even partially replaced, by computational processes or algorithms using artificial intelligence and information-retrieval techniques. Such augmentation and substitution is already common, and even predominates, in some industries, such as financial trading and legal research. Frey and Osborne (2013) have attempted to predict the extent to which a wide spectrum of jobs is susceptible to computerisation. Although journalists were not included in their analysis, some of the activities undertaken by journalists—for example those carried out by interviewers, proofreaders, and copy markers—were, and had a greater than 50 per cent probability of being computerised. It is that potential for the automation of journalistic work that is explored in this article. Frey and Osborne remind us of how automation can be aggressively resisted by workers, giving the example of William Lee who, they say, was driven out of Britain by the guild of hosiers for inventing a machine that knitted stockings. Such resistance also exists in the context of journalistic automation. For example, the German Federation of Journalists have said they \" don't think it is … desirable that journalism is done with algorithms \" (Konstantin Dörr, personal communication, 6 February …",
"title": ""
},
{
"docid": "e4edffeea08d6eae4dfc89c05f4c7507",
"text": "A partially reflective surface (PRS) antenna design enabling 1-bit dynamic beamwidth control is presented. The antenna operates at X-band and is based on microelectromechanical systems (MEMS) technology. The reconfigurable PRS unit cell monolithically integrates MEMS elements, whose positions are chosen to reduce losses while allowing a considerable beamwidth variation. The combined use of the proposed PRS unit cell topology and MEMS technology allows achieving low loss in the reconfigurable PRS. In addition, the antenna operates in dual-linear polarization with independent beamwidth control of each polarization. An operative MEMS-based PRS unit cell is fabricated and measured upon reconfiguration, showing very good agreement with simulations. The complete antenna system performance is rigorously evaluated based on full-wave simulations and the unit cell measurements, demonstrating an 18° and 23° variation of the half-power beamwidth in the E-plane and the H-plane, respectively. The antenna radiation efficiency is better than 75% in all states of operation.",
"title": ""
}
] |
scidocsrr
|
0a90b8e190c5b737669c235857031818
|
SVRG meets SAGA: k-SVRG - A Tale of Limited Memory
|
[
{
"docid": "4dc6f5768b43e6c491f0b08600acbea5",
"text": "Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even if individual loss functions are non-convex, as long as the expected loss is strongly convex.",
"title": ""
},
{
"docid": "367268c67657a43d1b981347e8175153",
"text": "In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.",
"title": ""
}
] |
[
{
"docid": "42e07265a724f946fe7c76b7d858279d",
"text": "This work investigates design optimisation and design trade-offs for multi-kW DC-DC Interleaved Boost Converters (IBC). A general optimisation procedure for weight minimisation is presented, and the trade-offs between the key design variables (e.g. switching frequency, topology) and performance metrics (e.g. power density, efficiency) are explored. It is shown that the optimal selection of components, switching frequency, and topology are heavily dependent on operating specifications such as voltage ratio, output voltage, and output power. With the device and component technologies considered, the single-phase boost converter is shown to be superior to the interleaved topologies in terms of power density for lower power, lower voltage specifications, whilst for higher-power specifications, interleaved designs are preferable. Comparison between an optimised design and an existing prototype for a 220 V–600 V, 40 kW specification, further illustrates the potential weight reduction that is afforded through design optimisation, with the optimised design predicting a reduction in component weight of around 33%.",
"title": ""
},
{
"docid": "4f3088d58e18e565795de1b017215ceb",
"text": "The platform switching (PLS) concept was introduced in the literature in 2005. The biological benefits and clinical effectiveness of the PLS technique have been established by several studies. In this article different aspects of PLS concept are discussed. Crestal bone loss, biologic width, and stress distribution in this concept are comprehensively reviewed. In this article the relative published articles from 1990 to 2011 have been evaluated by electronic search. Because of controversial results especially in immediate loading and animal studies, further modified research is needed to establish the mechanism and effect of the PLS technique. Essential changes in studies including using the control group for accurate interpretation of results and long-term observation, particularly through, randomized, prospective, multicenter trials with large numbers of participants, and implants are necessary.",
"title": ""
},
{
"docid": "ab05a100cfdb072f65f7dad85b4c5aea",
"text": "Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter.",
"title": ""
},
{
"docid": "8d313c48bfe642fd1455067bc2537ee4",
"text": "We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational time by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/ facebookresearch/adaptive-softmax.",
"title": ""
},
{
"docid": "ae23145d649c6df81a34babdfc142b31",
"text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "7df6c0084162a6ef3cb141f762859059",
"text": "The rotor integrity design for a high-speed modular air-cored axial-flux permanent-magnet (AFPM) generator is presented. The main focus is on the mechanical parametric optimization of the rotor, which becomes a more dominating design issue over electromagnetic optimization at high operational speeds. Approximate analytical formulas are employed for preliminary sizing of the mechanical parameters of the rotor, which consists of the permanent magnets, retainment ring, and back iron. Two-dimensional (2-D) finite-element analysis (FEA) models are used to optimize the values of the parameters. Then, 3-D FEA models are developed to verify the final design. Finally, based on the final design, an AFPM prototype is built for experimental validation, and mechanical integrity tests for the rotor are undertaken. The results confirm the validity of the analytical and FEA models, as well as the overall design approach.",
"title": ""
},
{
"docid": "60fbaecc398f04bdb428ccec061a15a5",
"text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.",
"title": ""
},
{
"docid": "c2332c4484fa18482ef072c003cf2caf",
"text": "The rapid development of smartphone technologies have resulted in the evolution of mobile botnets. The implications of botnets have inspired attention from the academia and the industry alike, which includes vendors, investors, hackers, and researcher community. Above all, the capability of botnets is uncovered through a wide range of malicious activities, such as distributed denial of service (DDoS), theft of business information, remote access, online or click fraud, phishing, malware distribution, spam emails, and building mobile devices for the illegitimate exchange of information and materials. In this study, we investigate mobile botnet attacks by exploring attack vectors and subsequently present a well-defined thematic taxonomy. By identifying the significant parameters from the taxonomy, we compared the effects of existing mobile botnets on commercial platforms as well as open source mobile operating system platforms. The parameters for review include mobile botnet architecture, platform, target audience, vulnerabilities or loopholes, operational impact, and detection approaches. In relation to our findings, research challenges are then presented in this domain.",
"title": ""
},
{
"docid": "59608978a30fcf6fc8bc0b92982abe69",
"text": "The self-advocacy movement (Dybwad & Bersani, 1996) grew out of resistance to oppressive practices of institutionalization (and worse) for people with cognitive disabilities. Moving beyond the worst abuses, people with cognitive disabilities seek as full participation in society as possible.",
"title": ""
},
{
"docid": "f315dca8c08645292c96aa1425d94a24",
"text": "WebRTC has quickly become popular as a video conferencing platform, partly due to the fact that many browsers support it. WebRTC utilizes the Google Congestion Control (GCC) algorithm to provide congestion control for realtime communications over UDP. The performance during a WebRTC call may be influenced by several factors, including the underlying WebRTC implementation, the device and network characteristics, and the network topology. In this paper, we perform a thorough performance evaluation of WebRTC both in emulated synthetic network conditions as well as in real wired and wireless networks. Our evaluation shows that WebRTC streams have a slightly higher priority than TCP flows when competing with cross traffic. In general, while in several of the considered scenarios WebRTC performed as expected, we observed important cases where there is room for improvement. These include the wireless domain and the newly added support for the video codecs VP9 and H.264 that does not perform as expected.",
"title": ""
},
{
"docid": "1c348f670ad5fe55701794de2284f043",
"text": "Face alignment is an important issue in many computer vision problems. The key problem is to find the nonlinear mapping from face image or feature to landmark locations. In this paper, we propose a novel cascaded approach with bidirectional Long Short Term Memory (LSTM) neural networks to approximate this nonlinear mapping. The cascaded structure is used to reduce the complexity of this problem and accelerate the algorithm by conducting the coarse-to-fine search. In each cascaded module, features of landmarks are delivered as inputs into the bidirectional LSTM network. The depth of the network guarantees the ability to learn highly complex mapping. The recurrent connections in LSTM explore the relationships of different landmarks and ensure that the shape of the face is maintained. On several challenging public databases, our approach achieves state-of-the-art performances.",
"title": ""
},
{
"docid": "e615ff8da6cdd43357e41aa97df88cc0",
"text": "In recent years, increasing numbers of people have been choosing herbal medicines or products to improve their health conditions, either alone or in combination with others. Herbs are staging a comeback and herbal \"renaissance\" occurs all over the world. According to the World Health Organization, 75% of the world's populations are using herbs for basic healthcare needs. Since the dawn of mankind, in fact, the use of herbs/plants has offered an effective medicine for the treatment of illnesses. Moreover, many conventional/pharmaceutical drugs are derived directly from both nature and traditional remedies distributed around the world. Up to now, the practice of herbal medicine entails the use of more than 53,000 species, and a number of these are facing the threat of extinction due to overexploitation. This paper aims to provide a review of the history and status quo of Chinese, Indian, and Arabic herbal medicines in terms of their significant contribution to the health promotion in present-day over-populated and aging societies. Attention will be focused on the depletion of plant resources on earth in meeting the increasing demand for herbs.",
"title": ""
},
{
"docid": "1e93cf6d4c004344fb8d49d2170ff768",
"text": "Modern deep neural networks have a large number of parameters, making them very powerful machine learning systems. A critical issue for training such large networks on large-scale data-sets is to prevent overfitting while at the same time providing enough model capacity. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks. In the first D step, we train a dense network to learn which connections are important. In the S step, we regularize the network by pruning the unimportant connections and retrain the network given the sparsity constraint. In the final D step, we increase the model capacity by freeing the sparsity constraint, re-initializing the pruned parameters, and retraining the whole dense network. Experiments show that DSD training can improve the performance of a wide range of CNN, RNN and LSTMs on the tasks of image classification, caption generation and speech recognition. On the Imagenet dataset, DSD improved the absolute accuracy of AlexNet, GoogleNet, VGG-16, ResNet50, ResNet-152 and SqueezeNet by a geo-mean of 2.1 points (Top-1) and 1.4 points (Top-5). On the WSJ’92 and WSJ’93 dataset, DSD improved DeepSpeech2 WER by 0.53 and 1.08 points. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by 2.0 points. DSD training flow produces the same model architecture and doesn’t incur any inference overhead.",
"title": ""
},
{
"docid": "d34b0fe424d1f1b748a0929fe1a67cc5",
"text": "Isolating sensitive data and state can increase the security and robustness of many applications. Examples include protecting cryptographic keys against exploits like OpenSSL’s Heartbleed bug or protecting a language runtime from native libraries written in unsafe languages. When runtime references across isolation boundaries occur relatively infrequently, then page-based hardware isolation can be used, because the cost of kernelor hypervisor-mediated domain switching is tolerable. However, some applications, such as isolating cryptographic session keys in a network-facing application or isolating frequently invoked native libraries in managed runtimes, require very frequent domain switching. In such applications, the overhead of kernelor hypervisormediated domain switching is prohibitive. In this paper, we present ERIM, a novel technique that provides hardware-enforced isolation with low overhead, even at high switching rates (ERIM’s average overhead is less than 1% for 100,000 switches per second). The key idea is to combine memory protection keys (MPKs), a feature recently added to Intel CPUs that allows protection domain switches in userspace, with binary inspection to prevent circumvention. We show that ERIM can be applied with little effort to new and existing applications, doesn’t require compiler changes, can run on a stock Linux kernel, and has low runtime overhead even at high domain switching rates.",
"title": ""
},
{
"docid": "c1fc1a31d9f5033a7469796d1222aef3",
"text": "Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static camera. This information is usually provided by motor encoders, however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy that is comparable to a standard static multi-camera configuration.",
"title": ""
},
{
"docid": "afcde1fb33c3e36f35890db09c548a1f",
"text": "Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold; to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content. In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.",
"title": ""
},
{
"docid": "995ad137b6711f254c6b9852611242b5",
"text": "In this paper, we study beam selection for millimeter-wave (mm-wave) multiuser multiple input multiple output (MIMO) systems where a base station (BS) and users are equipped with antenna arrays. Exploiting a certain sparsity of mm-wave channels, a low-complexity beam selection method for beamforming by low-cost analog beamformers is derived. It is shown that beam selection can be carried out without explicit channel estimation using the notion of compressive sensing (CS). Due to various reasons (e.g., the background noise and interference), some users may choose the same BS beam, which results in high inter-user interference. To overcome this problem, we further consider BS beam selection by users. Through simulations, we show that the performance gap between the proposed approach and the optimal beamforming approach, which requires full channel state information (CSI), becomes narrower for a larger number of users at a moderate/low signal-to-noise ratio (SNR). Since the optimal beamforming approach is difficult to be used due to prohibitively high computational complexity for large antenna arrays with a large number of users, the proposed approach becomes attractive for BSs and users in mm-wave systems where large antenna arrays can be employed.",
"title": ""
},
{
"docid": "9b5b10031ab67dfd664993f727f1bce8",
"text": "PURPOSE\nWe propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image.\n\n\nMETHODS\nWe simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of \"convolution\" and \"deconvolution\" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment.\n\n\nRESULTS\nThe proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth.\n\n\nCONCLUSIONS\nWe propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the policy of deep learning of the different 2D sectional appearances of 3D anatomical structures for CT cases and the majority voting of the 3D segmentation results from multiple crossed 2D sections to achieve availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise.",
"title": ""
},
{
"docid": "7b2614d636c96659be5733a44038dea4",
"text": "Researchers have suggested that general self-efficacy (GSE) can substantially contribute to organizational theory, research, and practice. Unfortunately, the limited construct validity work conducted on commonly used GSE measures has highlighted such potential problems as low content validity and multidimension-ality. The authors developed a new GSE (NGSE) scale and compared its psycho-metric properties and validity to that of the Sherer et al. General Self-Efficacy Scale (SGSE). Studies in two countries found that the NGSE scale has higher construct validity than the SGSE scale. Although shorter than the SGSE scale, the NGSE scale demonstrated high reliability, predicted specific self-efficacy (SSE) for a variety of tasks in various contexts, and moderated the influence of previous performance on subsequent SSE formation. Implications, limitations, and directions for future organizational research are discussed. Self-efficacy, defined as \" beliefs in one's capabilities to mobilize the motivation, cog-nitive resources, and courses of action needed to meet given situational demands \", self-efficacy beliefs vary on three dimensions: (a) level or magnitude (particular level of task difficulty), (b) strength (certainty of successfully performing a particular level of task difficulty), and Authors' Note: We thank Jon-Andrew Whiteman for assistance in data collection and James Maddux, (c) generality (the extent to which magnitude and strength beliefs generalize across tasks and situations). Bandura's restrictive words \" given situational demands \" have given self-efficacy a narrow focus, and most researchers have limited their research to the magnitude and strength dimensions, conceptualizing and studying self-efficacy as a task-specific or state-like construct (SSE) (e. More recently, researchers have become interested in the more trait-like generality dimension of self-efficacy, which has been termed general self-efficacy (GSE) (e. GSE is defined as \" one's belief in one's overall competence to effect requisite performances across a wide variety of achievement situations \" (Eden, in press) or as \" individuals' perception of their ability to perform across a variety of different situations \" (Judge, Erez, et al., 1998, p. 170). Thus, GSE captures differences among individuals in their tendency to view themselves as capable of meeting task demands in a broad array of contexts. have suggested that SSE is a motivational state and GSE is a motivational trait. According to Eden, both GSE and SSE denote beliefs about one's ability to achieve desired outcomes, but the constructs differ in the scope (i.e., generality or specificity) of the performance domain contemplated. As such, GSE and SSE share similar antecedents (e.g., actual …",
"title": ""
}
] |
scidocsrr
|
9ef15b06903e806a61a234f66a9bc8f9
|
3D Model Retrieval with Spherical Harmonics and Moments
|
[
{
"docid": "2b1a9f7131b464d9587137baf828cd3a",
"text": "The description of the spatial characteristics of twoand three-dimensional objects, in the framework of MPEG-7, is considered. The shape of an object is one of its fundamental properties, and this paper describes an e$cient way to represent the coarse shape, scale and composition properties of an object. This representation is invariant to resolution, translation and rotation, and may be used for both two-dimensional (2-D) and three-dimensional (3-D) objects. This coarse shape descriptor will be included in the eXperimentation Model (XM) of MPEG-7. Applications of such a description to search object databases, in particular the CAESAR anthropometric database are discussed. ( 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "09e24c889d3e80dc31425a3510d6c696",
"text": "Earlier work by Driscoll and Healy [16] has produced an efficient algorithm for computing the Fourier transform of band-limited functions on the 2-sphere. In this paper we present a reformulation and variation of the original algorithm which results in a greatly improved inverse transform, and consequent improved convolution algorithm for such functions. All require at most O(N log N ) operations where N is the number of sample points. We also address implementation considerations and give heuristics for allowing reliable and computationally efficient floating point implementations of slightly modified algorithms. These claims are supported by extensive numerical experiments from our implementation in C on DEC, HP and SGI platforms. These results indicate that variations of the algorithm are both reliable and efficient for a large range of useful problem sizes. Performance appears to be architecture-dependent. The paper concludes with a brief discussion of a few potential applications.",
"title": ""
}
] |
[
{
"docid": "02f28b1237b88471b0d96e5ff3871dc4",
"text": "Data mining is becoming increasingly important since the size of databases grows even larger and the need to explore hidden rules from the databases becomes widely recognized. Currently database systems are dominated by relational database and the ability to perform data mining using standard SQL queries will definitely ease implementation of data mining. However the performance of SQL based data mining is known to fall behind specialized implementation and expensive mining tools being on sale. In this paper we present an evaluation of SQL based data mining on commercial RDBMS (IBM DB2 UDB EEE). We examine some techniques to reduce I/O cost by using View and Subquery. Those queries can be more than 6 times faster than SETM SQL query reported previously. In addition, we have made performance evaluation on parallel database environment and compared the performance result with commercial data mining tool (IBM Intelligent Miner). We prove that SQL based data mining can achieve sufficient performance by the utilization of SQL query customization and database tuning.",
"title": ""
},
{
"docid": "073e3296fc2976f0db2f18a06b0cb816",
"text": "Nowadays spoofing detection is one of the priority research areas in the field of automatic speaker verification. The success of Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015 confirmed the impressive perspective in detection of unforeseen spoofing trials based on speech synthesis and voice conversion techniques. However, there is a small number of researches addressed to replay spoofing attacks which are more likely to be used by non-professional impersonators. This paper describes the Speech Technology Center (STC) anti-spoofing system submitted for ASVspoof 2017 which is focused on replay attacks detection. Here we investigate the efficiency of a deep learning approach for solution of the mentioned-above task. Experimental results obtained on the Challenge corpora demonstrate that the selected approach outperforms current state-of-the-art baseline systems in terms of spoofing detection quality. Our primary system produced an EER of 6.73% on the evaluation part of the corpora which is 72% relative improvement over the ASVspoof 2017 baseline system.",
"title": ""
},
{
"docid": "fa04e8e2e263d18ee821c7aa6ebed08e",
"text": "In this study we examined the effect of physical activity based labels on the calorie content of meals selected from a sample fast food menu. Using a web-based survey, participants were randomly assigned to one of four menus which differed only in their labeling schemes (n=802): (1) a menu with no nutritional information, (2) a menu with calorie information, (3) a menu with calorie information and minutes to walk to burn those calories, or (4) a menu with calorie information and miles to walk to burn those calories. There was a significant difference in the mean number of calories ordered based on menu type (p=0.02), with an average of 1020 calories ordered from a menu with no nutritional information, 927 calories ordered from a menu with only calorie information, 916 calories ordered from a menu with both calorie information and minutes to walk to burn those calories, and 826 calories ordered from the menu with calorie information and the number of miles to walk to burn those calories. The menu with calories and the number of miles to walk to burn those calories appeared the most effective in influencing the selection of lower calorie meals (p=0.0007) when compared to the menu with no nutritional information provided. The majority of participants (82%) reported a preference for physical activity based menu labels over labels with calorie information alone and no nutritional information. Whether these labels are effective in real-life scenarios remains to be tested.",
"title": ""
},
{
"docid": "2e72e09edaa4a13337609c058f139f6e",
"text": "Numerous experimental, epidemiologic, and clinical studies suggest that nonsteroidal anti-inflammatory drugs (NSAIDs), particularly the highly selective cyclooxygenase (COX)-2 inhibitors, have promise as anticancer agents. NSAIDs restore normal apoptosis in human adenomatous colorectal polyps and in various cancer cell lines that have lost adenomatous polyposis coli gene function. NSAIDs also inhibit angiogenesis in cell culture and rodent models of angiogenesis. Many epidemiologic studies have found that long-term use of NSAIDs is associated with a lower risk of colorectal cancer, adenomatous polyps, and, to some extent, other cancers. Two NSAIDs, sulindac and celecoxib, have been found to inhibit the growth of adenomatous polyps and cause regression of existing polyps in randomized trials of patients with familial adenomatous polyposis (FAP). However, unresolved questions about the safety, efficacy, optimal treatment regimen, and mechanism of action of NSAIDs currently limit their clinical application to the prevention of polyposis in FAP patients. Moreover, the development of safe and effective drugs for chemoprevention is complicated by the potential of even rare, serious toxicity to offset the benefit of treatment, particularly when the drug is administered to healthy people who have low annual risk of developing the disease for which treatment is intended. This review considers generic approaches to improve the balance between benefits and risks associated with the use of NSAIDs in chemoprevention. We critically examine the published experimental, clinical, and epidemiologic literature on NSAIDs and cancer, especially that regarding colorectal cancer, and identify strategies to overcome the various logistic and scientific barriers that impede clinical trials of NSAIDs for cancer prevention. Finally, we suggest research opportunities that may help to accelerate the future clinical application of NSAIDs for cancer prevention or treatment.",
"title": ""
},
{
"docid": "7b95b771e6194efb2deee35cfc179040",
"text": "A Bayesian nonparametric model is a Bayesian model on an infinite-dimensional parameter space. The parameter space is typically chosen as the set of all possible solutions for a given learning problem. For example, in a regression problem the parameter space can be the set of continuous functions, and in a density estimation problem the space can consist of all densities. A Bayesian nonparametric model uses only a finite subset of the available parameter dimensions to explain a finite sample of observations, with the set of dimensions chosen depending on the sample, such that the effective complexity of the model (as measured by the number of dimensions used) adapts to the data. Classical adaptive problems, such as nonparametric estimation and model selection, can thus be formulated as Bayesian inference problems. Popular examples of Bayesian nonparametric models include Gaussian process regression, in which the correlation structure is refined with growing sample size, and Dirichlet process mixture models for clustering, which adapt the number of clusters to the complexity of the data. Bayesian nonparametric models have recently been applied to a variety of machine learning problems, including regression, classification, clustering, latent variable modeling, sequential modeling, image segmentation, source separation and grammar induction.",
"title": ""
},
{
"docid": "2c9cfc7bf3b88f27046b9366b6053867",
"text": "The purpose of this thesis project is to study and evaluate a UWB Synthetic Aperture Radar (SAR) data image formation algorithm, that was previously less familiar and, that has recently got much attention in this field. Certain properties of it made it acquire a status in radar signal processing branch. This is a fast time-domain algorithm named Local Backprojection (LBP). The LBP algorithm has been implemented for SAR image formation. The algorithm has been simulated in MATLAB using standard values of pertinent parameters. Later, an evaluation of the LBP algorithm has been performed and all the comments, estimation and judgment have been done on the basis of the resulting images. The LBP has also been compared with the basic time-domain algorithm Global Backprojection (GBP) with respect to the SAR images. The specialty of LBP algorithm is in its reduced computational load than in GBP. LBP is a two-stage algorithm — it forms the beam first for a particular subimage and, in a later stage, forms the image of that subimage area. The signal data collected from the target is processed and backprojected locally for every subimage individually. This is the reason of naming it Local backprojection. After the formation of all subimages, these are arranged and combined coherently to form the full SAR image.",
"title": ""
},
{
"docid": "dcb5a2764d0ccf2746d5aa07e9a88e66",
"text": "●Objective: Create a Japanese morphological analyzer (word segmentation + POS tagging) that is robust and adaptable to new domains ●Approach: Use pointwise prediction, which estimates all tags independently of other tags ●Pointwise prediction: ●Robust: does not rely on dictionaries as much as previous methods ●Adaptable: it can be learned from single annotated words, not full sentences ●Works with active learning: Single words to annotate can be chosen effectively ●Evaluation on Japanese morphological analysis shows improvement over traditional methods 1",
"title": ""
},
{
"docid": "061ac4487fba7837f44293a2d20b8dd9",
"text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.",
"title": ""
},
{
"docid": "2ca40fc7cf2cb7377b9b89be2606b096",
"text": "By “elementary” plane geometry I mean the geometry of lines and circles—straightedge and compass constructions—in both Euclidean and non-Euclidean planes. An axiomatic description of it is in Sections 1.1, 1.2, and 1.6. This survey highlights some foundational history and some interesting recent discoveries that deserve to be better known, such as the hierarchies of axiom systems, Aristotle’s axiom as a “missing link,” Bolyai’s discovery—proved and generalized by William Jagy—of the relationship of “circle-squaring” in a hyperbolic plane to Fermat primes, the undecidability, incompleteness, and consistency of elementary Euclidean geometry, and much more. A main theme is what Hilbert called “the purity of methods of proof,” exemplified in his and his early twentieth century successors’ works on foundations of geometry.",
"title": ""
},
{
"docid": "d2ce4df3be70141a3ab55aa0750f19ca",
"text": "Agile methods have become popular in recent years because the success rate of project development using Agile methods is better than structured design methods. Nevertheless, less than 50 percent of projects implemented using Agile methods are considered successful, and selecting the wrong Agile method is one of the reasons for project failure. Selecting the most appropriate Agile method is a challenging task because there are so many to choose from. In addition, potential adopters believe that migrating to an Agile method involves taking a drastic risk. Therefore, to assist project managers and other decision makers, this study aims to identify the key factors that should be considered when selecting an appropriate Agile method. A systematic literature review was performed to elicit these factors in an unbiased manner, and then content analysis was used to analyze the resultant data. It was found that the nature of the project, development team skills, project constraints, customer involvement and organizational culture are the key factors that should guide decision makers in the selection of an appropriate Agile method based on the value these factors have for different organizations and/or different projects. Keywords— Agile method selection; factors of selecting Agile methods; SLR",
"title": ""
},
{
"docid": "26d7cf1e760e9e443f33ebd3554315b6",
"text": "The arrival of a multinational corporation often looks like a death sentence to local companies in an emerging market. After all, how can they compete in the face of the vast financial and technological resources, the seasoned management, and the powerful brands of, say, a Compaq or a Johnson & Johnson? But local companies often have more options than they might think, say the authors. Those options vary, depending on the strength of globalization pressures in an industry and the nature of a company's competitive assets. In the worst case, when globalization pressures are strong and a company has no competitive assets that it can transfer to other countries, it needs to retreat to a locally oriented link within the value chain. But if globalization pressures are weak, the company may be able to defend its market share by leveraging the advantages it enjoys in its home market. Many companies in emerging markets have assets that can work well in other countries. Those that operate in industries where the pressures to globalize are weak may be able to extend their success to a limited number of other markets that are similar to their home base. And those operating in global markets may be able to contend head-on with multinational rivals. By better understanding the relationship between their company's assets and the industry they operate in, executives from emerging markets can gain a clearer picture of the options they really have when multinationals come to stay.",
"title": ""
},
{
"docid": "ac57fab046cfd02efa1ece262b07492f",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
},
{
"docid": "775f4fd21194e18cdf303248f1cde206",
"text": "Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.",
"title": ""
},
{
"docid": "19b5ec2f1347b458bccc79eb18b5bc39",
"text": "Objective: Cyber bullying is a combination of the word cyber and bullying where cyber basically means the Internet or on-line. In this case, cyber bullying will focus on getting in action with bullying by using the Internet or modern technologies such as on-line chats, online media and short messaging texts through social media. The current review aims to compile and summarize the results of relevant publications related to “cyber bullying.\" The review also includes discussing on relevant variables related to cyber bullying. Methods: Information from relevant publications addresses the demographics, prevalence, differences between cyber bullying and traditional bullying, bullying motivation, avenues to overcome it, preventions, coping mechanisms in relation to “cyber bullying” were retrieved and summarized. Results: The prevalence of cyber bullying ranges from 30% 55% and the contributing risk factors include positive association with perpetration, non-supportive school environment, and Internet risky behaviors. Both males and females have been equal weigh on being perpetrators and victims. The older groups with more technology exposures are more prone to be exposed to cyber bullying. With respect to individual components of bullying, repetition is less evident in cyber bullying and power imbalance is not measured by physicality but in terms of popularity and technical knowledge of the perpetrator. Conclusion: Due to the limited efforts centralized on the intervention, future researchers should focus on testing the efficacy of possible interventional programs and the effects of different roles in the intervention in order to curb the problem and prevent more deleterious effects of cyber bullying. ASEAN Journal of Psychiatry, Vol. 17 (1): January – June 2016: XX XX.",
"title": ""
},
{
"docid": "b602123afa9a78fd37e71038dfb5c4c7",
"text": "This paper presents a new approach for supervised power disaggregation by using a deep recurrent long short term memory network. It is useful to extract the power signal of one dominant appliance or any subcircuit from the aggregate power signal. To train the network, a measurement of the power signal of the target appliance in addition to the total power signal during the same time period is required. The method is supervised, but less restrictive in practice since submetering of an important appliance or a subcircuit for a short time is feasible. The main advantages of this approach are: a) It is also applicable to variable load and not restricted to on-off and multi-state appliances. b) It does not require hand-engineered event detection and feature extraction. c) By using multiple networks, it is possible to disaggregate multiple appliances or subcircuits at the same time. d) It also works with a low cost power meter as shown in the experiments with the Reference Energy Disaggregation (REDD) dataset (1/3Hz sampling frequency, only real power).",
"title": ""
},
{
"docid": "6349e0444220d4a8ea3c34755954a58a",
"text": "We present QuickNet, a fast and accurate network architecture that is both faster and significantly more accurate than other “fast” deep architectures like SqueezeNet. Furthermore, it uses less parameters than previous networks, making it more memory efficient. We do this by making two major modifications to the reference “Darknet” model (Redmon et al, 2015): 1) The use of depthwise separable convolutions and 2) The use of parametric rectified linear units. We make the observation that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time and the observation that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. Our architecture provides at least four major advantages: (1) A smaller model size, which is more tenable on memory constrained systems; (2) A significantly faster network which is more tenable on computationally constrained systems; (3) A high accuracy of 95.7% on the CIFAR-10 Dataset which outperforms all but one result published so far, although we note that our works are orthogonal approaches and can be combined (4) Orthogonality to previous model compression approaches allowing for further speed gains to be realized.",
"title": ""
},
{
"docid": "56206ddb152c3a09f3e28a6ffa703cd6",
"text": "This chapter introduces the operation and control of a Doubly-fed Induction Generator (DFIG) system. The DFIG is currently the system of choice for multi-MW wind turbines. The aerodynamic system must be capable of operating over a wide wind speed range in order to achieve optimum aerodynamic efficiency by tracking the optimum tip-speed ratio. Therefore, the generator’s rotor must be able to operate at a variable rotational speed. The DFIG system therefore operates in both suband super-synchronous modes with a rotor speed range around the synchronous speed. The stator circuit is directly connected to the grid while the rotor winding is connected via slip-rings to a three-phase converter. For variable-speed systems where the speed range requirements are small, for example ±30% of synchronous speed, the DFIG offers adequate performance and is sufficient for the speed range required to exploit typical wind resources. An AC-DC-AC converter is included in the induction generator rotor circuit. The power electronic converters need only be rated to handle a fraction of the total power – the rotor power – typically about 30% nominal generator power. Therefore, the losses in the power electronic converter can be reduced, compared to a system where the converter has to handle the entire power, and the system cost is lower due to the partially-rated power electronics. This chapter will introduce the basic features and normal operation of DFIG systems for wind power applications basing the description on the standard induction generator. Different aspects that will be described include their variable-speed feature, power converters and their associated control systems, and application issues.",
"title": ""
},
{
"docid": "a60720be4018e744d9e99c68d29f24c5",
"text": "Edentulism can be a debilitating handicap. Zarb described endentulous individuals who could not function as 'denture cripples'. Most difficulty with complete denture prostheses arises from the inability to function with the mandibular prostheses. Factors that adversely affect successful use of a complete denture on the mandible include: 1) the mobility of the floor of the mouth, 2) thin mucosa lining the alveolar ridge, 3) reduced support area and 4) the motion of the mandible (Figs 1,2). These factors alone can explain the difficulty of wearing a denture on the mandibular arch compared to the maxillary arch. The maxilla exhibits much less mobility on the borders of the denture than the mandible, moreover having a stable palate with thick fibrous tissues available to support the prostheses and resist occlusal forces. These differences explain most of the reasons why patients experience difficulty with using a complete denture on the mandibular arch compared to the maxillary arch.",
"title": ""
},
{
"docid": "a4e6b629ec4b0fdf8784ba5be1a62260",
"text": "Today's real-world databases typically contain millions of items with many thousands of fields. As a result, traditional distribution-based outlier detection techniques have more and more restricted capabilities and novel k-nearest neighbors based approaches have become more and more popular. However, the problems with these k-nearest neighbors rankings for top n outliers, are very computationally expensive for large datasets, and doubts exist in general whether they would work well for high dimensional datasets. To partially circumvent these problems, we propose in this paper a new global outlier factor and a new local outlier factor and an efficient outlier detection algorithm developed upon them that is easy to implement and can provide competing performances with existing solutions. Experiments performed on both synthetic and real data sets demonstrate the efficacy of our method. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
0ff3d0a8db58f8ad6be35e0e2f1aca60
|
Is Faster R-CNN Doing Well for Pedestrian Detection?
|
[
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
}
] |
[
{
"docid": "b5009853d22801517431f46683b235c2",
"text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "71a65ff432ae4b53085ca5c923c29a95",
"text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenancespecific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.",
"title": ""
},
{
"docid": "ee3b2a97f01920ccbc653f4833820ca0",
"text": "Notwithstanding many years of progress, pedestrian recognition is still a difficult but important problem. We present a novel multilevel Mixture-of-Experts approach to combine information from multiple features and cues with the objective of improved pedestrian classification. On pose-level, shape cues based on Chamfer shape matching provide sample-dependent priors for a certain pedestrian view. On modality-level, we represent each data sample in terms of image intensity, (dense) depth, and (dense) flow. On feature-level, we consider histograms of oriented gradients (HOG) and local binary patterns (LBP). Multilayer perceptrons (MLP) and linear support vector machines (linSVM) are used as expert classifiers. Experiments are performed on a unique real-world multi-modality dataset captured from a moving vehicle in urban traffic. This dataset has been made public for research purposes. Our results show a significant performance boost of up to a factor of 42 in reduction of false positives at constant detection rates of our approach compared to a baseline intensity-only HOG/linSVM approach.",
"title": ""
},
{
"docid": "251bf66c8f742ceafc91ef92dc28085b",
"text": "Recently, Altug and Wagner [1] posed a question regarding the optimal behavior of the probability of error when channel coding rate converges to the capacity sufficiently slowly. They gave a sufficient condition for the discrete memoryless channel (DMC) to satisfy a moderate deviation property (MDP) with the constant equal to the channel dispersion. Their sufficient condition excludes some practically interesting channels, such as the binary erasure channel and the Z-channel. We extend their result in two directions. First, we show that a DMC satisfies MDP if and only if its channel dispersion is nonzero. Second, we prove that the AWGN channel also satisfies MDP with a constant equal to the channel dispersion. While the methods used by Altug and Wagner are based on the method of types and other DMC-specific ideas, our proofs (in both achievability and converse parts) rely on the tools from our recent work [2] on finite-blocklength regime that are equally applicable to non-discrete channels and channels with memory.",
"title": ""
},
{
"docid": "44bbc67f44f4f516db97b317ae16a22a",
"text": "Although the number of occupational therapists working in mental health has dwindled, the number of people who need our services has not. In our tendency to cling to a medical model of service provision, we have allowed the scope and content of our services to be limited to what has been supported within this model. A social model that stresses functional adaptation within the community, exemplified in psychosocial rehabilitation, offers a promising alternative. A strongly proactive stance is needed if occupational therapists are to participate fully. Occupational therapy can survive without mental health specialists, but a large and deserving population could ultimately be deprived of a valuable service.",
"title": ""
},
{
"docid": "781fbf087201e480899f8bfb7e0e1838",
"text": "The term \"Ehlers-Danlos syndrome\" (EDS) groups together an increasing number of heritable connective tissue disorders mainly featuring joint hypermobility and related complications, dermal dysplasia with abnormal skin texture and repair, and variable range of the hollow organ and vascular dysfunctions. Although the nervous system is not considered a primary target of the underlying molecular defect, recently, increasing attention has been posed on neurological manifestations of EDSs, such as musculoskeletal pain, fatigue, headache, muscle weakness and paresthesias. Here, a comprehensive overview of neurological findings of these conditions is presented primarily intended for the clinical neurologist. Features are organized under various subheadings, including pain, fatigue, headache, stroke and cerebrovascular disease, brain and spine structural anomalies, epilepsy, muscular findings, neuropathy and developmental features. The emerging picture defines a wide spectrum of neurological manifestations that are unexpectedly common and potentially disabling. Their evaluation and correct interpretation by the clinical neurologist is crucial for avoiding superfluous investigations, wrong therapies, and inappropriate referral. A set of basic tools for patient's recognition is offered for raising awareness among neurologists on this underdiagnosed group of hereditary disorders.",
"title": ""
},
{
"docid": "f51d5eb0e569606aa4fc9a87521dfd9f",
"text": "This article proposes LA-LDA, a location-aware probabilistic generative model that exploits location-based ratings to model user profiles and produce recommendations. Most of the existing recommendation models do not consider the spatial information of users or items; however, LA-LDA supports three classes of location-based ratings, namely spatial user ratings for nonspatial items, nonspatial user ratings for spatial items, and spatial user ratings for spatial items. LA-LDA consists of two components, ULA-LDA and ILA-LDA, which are designed to take into account user and item location information, respectively. The component ULA-LDA explicitly incorporates and quantifies the influence from local public preferences to produce recommendations by considering user home locations, whereas the component ILA-LDA recommends items that are closer in both taste and travel distance to the querying users by capturing item co-occurrence patterns, as well as item location co-occurrence patterns. The two components of LA-LDA can be applied either separately or collectively, depending on the available types of location-based ratings. To demonstrate the applicability and flexibility of the LA-LDA model, we deploy it to both top-k recommendation and cold start recommendation scenarios. Experimental evidence on large-scale real-world data, including the data from Gowalla (a location-based social network), DoubanEvent (an event-based social network), and MovieLens (a movie recommendation system), reveal that LA-LDA models user profiles more accurately by outperforming existing recommendation models for top-k recommendation and the cold start problem.",
"title": ""
},
{
"docid": "7790f5dc699dc264d7be6f7376597867",
"text": "The CNN-encoding of features from entire videos for the representation of human actions has rarely been addressed. Instead, CNN work has focused on approaches to fuse spatial and temporal networks, but these were typically limited to processing shorter sequences. We present a new video representation, called temporal linear encoding (TLE) and embedded inside of CNNs as a new layer, which captures the appearance and motion throughout entire videos. It encodes this aggregated information into a robust video feature representation, via end-to-end learning. Advantages of TLEs are: (a) they encode the entire video into a compact feature representation, learning the semantics and a discriminative feature space, (b) they are applicable to all kinds of networks like 2D and 3D CNNs for video classification, and (c) they model feature interactions in a more expressive way and without loss of information. We conduct experiments on two challenging human action datasets: HMDB51 and UCF101. The experiments show that TLE outperforms current state-of-the-art methods on both datasets.",
"title": ""
},
{
"docid": "6e878dbb176ea3a18190a8ab8177425a",
"text": "We present a new computing machine, called an active element machine (AEM), and the AEM programming language. This computing model is motivated by the positive aspects of dendritic integration, inspired by biology, and traditional programming languages based on the register machine. Distinct from the traditional register machine, the fundamental computing elements – active elements – compute simultaneously. Distinct from traditional programming languages, all active element commands have an explicit reference to time. These attributes make the AEM an inherently parallel machine and enable the AEM to change its architecture (program) as it is executing its program.",
"title": ""
},
{
"docid": "631cd44345606641454e9353e071f2c5",
"text": "Microblogs are rich sources of information because they provide platforms for users to share their thoughts, news, information, activities, and so on. Twitter is one of the most popular microblogs. Twitter users often use hashtags to mark specific topics and to link them with related tweets. In this study, we investigate the relationship between the music listening behaviors of Twitter users and a popular music ranking service by comparing information extracted from tweets with music-related hashtags and the Billboard chart. We collect users' music listening behavior from Twitter using music-related hashtags (e.g., #nowplaying). We then build a predictive model to forecast the Billboard rankings and hit music. The results show that the numbers of daily tweets about a specific song and artist can be effectively used to predict Billboard rankings and hits. This research suggests that users' music listening behavior on Twitter is highly correlated with general music trends and could play an important role in understanding consumers' music consumption patterns. In addition, we believe that Twitter users' music listening behavior can be applied in the field of Music Information Retrieval (MIR).",
"title": ""
},
{
"docid": "26d7cf1e760e9e443f33ebd3554315b6",
"text": "The arrival of a multinational corporation often looks like a death sentence to local companies in an emerging market. After all, how can they compete in the face of the vast financial and technological resources, the seasoned management, and the powerful brands of, say, a Compaq or a Johnson & Johnson? But local companies often have more options than they might think, say the authors. Those options vary, depending on the strength of globalization pressures in an industry and the nature of a company's competitive assets. In the worst case, when globalization pressures are strong and a company has no competitive assets that it can transfer to other countries, it needs to retreat to a locally oriented link within the value chain. But if globalization pressures are weak, the company may be able to defend its market share by leveraging the advantages it enjoys in its home market. Many companies in emerging markets have assets that can work well in other countries. Those that operate in industries where the pressures to globalize are weak may be able to extend their success to a limited number of other markets that are similar to their home base. And those operating in global markets may be able to contend head-on with multinational rivals. By better understanding the relationship between their company's assets and the industry they operate in, executives from emerging markets can gain a clearer picture of the options they really have when multinationals come to stay.",
"title": ""
},
{
"docid": "c04dd7ccb0426ef5d44f0420d321904d",
"text": "In this paper, we introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture temporal structure in continuous activity videos. Our layer is designed to allow the model to learn a latent hierarchy of sub-event intervals. Our approach is fully differentiable while relying on a significantly less number of parameters, enabling its end-to-end training with standard backpropagation. We present our convolutional video models with multiple TGM layers for activity detection. Our experiments on multiple datasets including Charades and MultiTHUMOS confirm the benefit of our TGM layers, illustrating that it outperforms other models and temporal convolutions.",
"title": ""
},
{
"docid": "4e8c39eaa7444158a79573481b80a77f",
"text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.",
"title": ""
},
{
"docid": "328c1c6ed9e38a851c6e4fd3ab71c0f8",
"text": "We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.",
"title": ""
},
{
"docid": "d593c18bf87daa906f83d5ff718bdfd0",
"text": "Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding why people participate in CC. Therefore, in this article we investigate people’s motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered onto a CC site. The results show that participation in CC is motivated by many factors such as its sustainability, enjoyment of the activity as well as economic gains. An interesting detail in the result is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitudebehavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessary translate into action. Introduction",
"title": ""
},
{
"docid": "3b06bc2d72e0ae7fa75873ed70e23fc3",
"text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.",
"title": ""
},
{
"docid": "ceef658faa94ad655521ece5ac5cba1d",
"text": "We propose learning a semantic visual feature representation by training a neural network supervised solely by point and object trajectories in video sequences. Currently, the predominant paradigm for learning visual features involves training deep convolutional networks on an image classification task using very large human-annotated datasets, e.g. ImageNet. Though effective as supervision, semantic image labels are costly to obtain. On the other hand, under high enough frame rates, frame-to-frame associations between the same 3D physical point or an object can be established automatically. By transitivity, such associations grouped into tracks can relate object/point appearance across large changes in pose, illumination and camera viewpoint, providing a rich source of invariance that can be used for training. We train a siamese network we call it AssociationNet to discriminate between correct and wrong associations between patches in different frames of a video sequence. We show that AssociationNet learns useful features when used as pretraining for object recognition in static images, and outperforms random weight initialization and alternative pretraining methods.",
"title": ""
},
{
"docid": "6616607ee5a856a391131c5e2745bc79",
"text": "Project management (PM) landscaping is continually changing in the IT industry. Working with the small teams and often with the limited budgets, while facing frequent changes in the business requirements, project managers are under continuous pressure to deliver fast turnarounds. Following the demands of the IT project management, leaders in this industry are optimizing and adopting different and new more effective styles and strategies. This paper proposes a new hybrid way of managing IT projects, flexibly combining the traditional and the Agile method. Also, it investigates what is the necessary organizational transition in an IT company, required before converting from the traditional to the proposed new hybrid method.",
"title": ""
},
{
"docid": "bb6737c84b0d96896c82abefee876858",
"text": "This paper introduces a novel tactile sensor with the ability to detect objects in the sensor's near proximity. For both tasks, the same capacitive sensing principle is used. The tactile part of the sensor provides a tactile sensor array enabling the sensor to gather pressure profiles of the mechanical contact area. Several tactile sensors have been developed in the past. These sensors lack the capability of detecting objects in their near proximity before a mechanical contact occurs. Therefore, we developed a tactile proximity sensor, which is able to measure the current flowing out of or even into the sensor. Measuring these currents and the exciting voltage makes a calculation of the capacitance coupled to the sensor's surface and, using more sensors of this type, the change of capacitance between the sensors possible. The sensor's mechanical design, the analog/digital signal processing and the hardware efficient demodulator structure, implemented on a FPGA, will be discussed in detail.",
"title": ""
}
] |
scidocsrr
|
017329e1d30a515ce8518a47edf754d8
|
Multi-oriented scene text detection in video based on wavelet and angle projection boundary growing
|
[
{
"docid": "a43e646ee162a23806c3b8f0a9d69b23",
"text": "This paper describes the results of the ICDAR 2005 competition for locating text in camera captured scenes. For this we used the same data as the ICDAR 2003 competition, which has been kept private until now. This allows a direct comparison with the 2003 entries. The main result is that the leading 2005 entry has improved significantly on the leading 2003 entry, with an increase in average f-score from 0.5 to 0.62, where the f-score is the same adapted information retrieval measure used for the 2003 competition. The paper also discusses the Web-based deployment and evaluation of text locating systems, and one of the leading entries has now been deployed in this way. This mode of usage could lead to more complete and more immediate knowledge of the strengths and weaknesses of each newly developed system.",
"title": ""
}
] |
[
{
"docid": "7a86d9e19930ce5af78431a52bb75728",
"text": "Mapping Relational Databases (RDB) to RDF is an active field of research. The majority of data on the current Web is stored in RDBs. Therefore, bridging the conceptual gap between the relational model and RDF is needed to make the data available on the Semantic Web. In addition, recent research has shown that Semantic Web technologies are useful beyond the Web, especially if data from different sources has to be exchanged or integrated. Many mapping languages and approaches were explored leading to the ongoing standardization effort of the World Wide Web Consortium (W3C) carried out in the RDB2RDF Working Group (WG). The goal and contribution of this paper is to provide a feature-based comparison of the state-of-the-art RDB-to-RDF mapping languages. It should act as a guide in selecting a RDB-to-RDF mapping language for a given application scenario and its requirements w.r.t. mapping features. Our comparison framework is based on use cases and requirements for mapping RDBs to RDF as identified by the RDB2RDF WG. We apply this comparison framework to the state-of-the-art RDB-to-RDF mapping languages and report the findings in this paper. As a result, our classification proposes four categories of mapping languages: direct mapping, read-only general-purpose mapping, read-write general-purpose mapping, and special-purpose mapping. We further provide recommendations for selecting a mapping language.",
"title": ""
},
{
"docid": "7de901a988afab3aee99f44d3f98cb46",
"text": "A pulsewidth modulation (PWM) and pulse frequency modulation (PFM) hybrid modulated three-port converter (TPC) interfacing a photovoltaic (PV) source, a storage battery, and a load is proposed for a standalone PV/battery power system. The TPC is derived by integrating a two-phase interleaved boost circuit and a full-bridge LLC resonant circuit. Hence, it features a reduced number of switches, lower cost, and single-stage power conversion between any two of the three ports. With the PWM and PFM hybrid modulation strategy, the dc voltage gain from the PV to the load is wide, the input current ripple is small, and flexible power management among three ports can be easily achieved. Moreover, all primary switches turn ON with zero-voltage switching (ZVS), while all secondary diodes operate with zero-current switching over full operating range, which is beneficial for reducing switching losses, switch voltage stress, and electromagnetic interference. The topology derivation and power transfer analysis are presented. Depending on the resonant states, two different operation modes are identified and explored. Then, main characteristics, including the gain, input current ripple, and ZVS, are analyzed and compared. Furthermore, guidelines for parameter design and optimization are given as well. Finally, a 500-W laboratory prototype is built and tested to verify the effectiveness and advantages of all proposals.",
"title": ""
},
{
"docid": "43228a3436f23d786ad7faa7776f1e1b",
"text": "Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) include Wegener granulomatosis, microscopic polyangiitis, Churg–Strauss syndrome and renal-limited vasculitis. This Review highlights the progress that has been made in our understanding of AAV pathogenesis and discusses new developments in the treatment of these diseases. Evidence from clinical studies, and both in vitro and in vivo experiments, supports a pathogenic role for ANCAs in the development of AAV; evidence is stronger for myeloperoxidase-ANCAs than for proteinase-3-ANCAs. Neutrophils, complement and effector T cells are also involved in AAV pathogenesis. With respect to treatment of AAV, glucocorticoids, cyclophosphamide and other conventional therapies are commonly used to induce remission in generalized disease. Pulse intravenous cyclophosphamide is equivalent in efficacy to oral cyclophosphamide but seems to be associated with less adverse effects. Nevertheless, alternatives to cyclophosphamide therapy have been investigated, such as the use of methotrexate as a less-toxic alternative to cyclophosphamide to induce remission in non-organ-threatening or non-life-threatening AAV. Furthermore, rituximab is equally as effective as cyclophosphamide for induction of remission in AAV and might become the standard of therapy in the near future. Controlled trials in which specific immune effector cells and molecules are being therapeutically targeted have been initiated or are currently being planned.",
"title": ""
},
{
"docid": "365c76244ffb82ff7fd6e02954438951",
"text": "This paper first presents open loop behavior of buck boost and cuk dc-dc converter operated under Continuous conduction mode (CCM). This is achieved withthe help of state equations and MATLAB/SIMULINK tool for simulation of those state equations. Using the knowledge of open loop converter behavior, a closed loop converter is designed. DC-DC converter will be designed for specific line and load conditions. But in practice there is deviation of the circuit operation from the desired nominal behavior due to changes in the source, load and circuit parameters. So we need to design a proper controller or compensator to overcome this situation of the circuit operation. This paper presents PID controller designed such that any input variations produces a constant output voltage.",
"title": ""
},
{
"docid": "4977071d6bd1a0d99fcc45d97d71ec0b",
"text": "In today's Internet, there are many challenges such as low-latency support for interactive communication, security and privacy of user data, as well as development and deployment of new transport mechanisms. Quick UDP Internet Connections (QUIC) is a new transport protocol that addresses these challenges, focusing on HTTP/2 transmission as a first use case. The first QUIC working group meeting took place at IETF-97 in November 2016, and it has begun the standardization process. This article introduces the key features of QUIC and discusses the potential challenges that require further consideration.",
"title": ""
},
{
"docid": "6a3afa9644477304d2d32d99c99e07c8",
"text": "This paper presents a comprehensive survey of five most widely used in-vehicle networks from three perspectives: system cost, data transmission capacity, and fault-tolerance capability. The paper reviews the pros and cons of each network, and identifies possible approaches to improve the quality of service (QoS). In addition, two classifications of automotive gateways have been presented along with a brief discussion about constructing a comprehensive in-vehicle communication system with different networks and automotive gateways. Furthermore, security threats to in-vehicle networks are briefly discussed, along with the corresponding protective methods. The survey concludes with highlighting the trends in future development of in-vehicle network technology and a proposal of a topology of the next generation in-vehicle network.",
"title": ""
},
{
"docid": "c08fa2224b8a38b572ea546abd084bd1",
"text": "Off-chip main memory has long been a bottleneck for system performance. With increasing memory pressure due to multiple on-chip cores, effective cache utilization is important. In a system with limited cache space, we would ideally like to prevent 1) cache pollution, i.e., blocks with low reuse evicting blocks with high reuse from the cache, and 2) cache thrashing, i.e., blocks with high reuse evicting each other from the cache.\n In this paper, we propose a new, simple mechanism to predict the reuse behavior of missed cache blocks in a manner that mitigates both pollution and thrashing. Our mechanism tracks the addresses of recently evicted blocks in a structure called the Evicted-Address Filter (EAF). Missed blocks whose addresses are present in the EAF are predicted to have high reuse and all other blocks are predicted to have low reuse. The key observation behind this prediction scheme is that if a block with high reuse is prematurely evicted from the cache, it will be accessed soon after eviction. We show that an EAF-implementation using a Bloom filter, which is cleared periodically, naturally mitigates the thrashing problem by ensuring that only a portion of a thrashing working set is retained in the cache, while incurring low storage cost and implementation complexity.\n We compare our EAF-based mechanism to five state-of-the-art mechanisms that address cache pollution or thrashing, and show that it provides significant performance improvements for a wide variety of workloads and system configurations.",
"title": ""
},
{
"docid": "b5ca5d8ee536160c293ca52a2f3c4db2",
"text": "We present a neural network based shiftreduce CCG parser, the first neural-network based parser for CCG. We also study the impact of neural network based tagging models, and greedy versus beam-search parsing, by using a structured neural network model. Our greedy parser obtains a labeled F-score of 83.27%, the best reported result for greedy CCG parsing in the literature (an improvement of 2.5% over a perceptron based greedy parser) and is more than three times faster. With a beam, our structured neural network model gives a labeled F-score of 85.57% which is 0.6% better than the perceptron based counterpart.",
"title": ""
},
{
"docid": "dfae67d62731a9307a10de7b11d6d117",
"text": "A 16 Gb 4-state MLC NAND flash memory augments the sustained program throughput to 34 MB/s by fully exercising all the available cells along a selected word line and by using additional performance enhancement modes. The same chip operating as an 8 Gb SLC device guarantees over 60 MB/s programming throughput. The newly introduced all bit line (ABL) architecture has multiple advantages when higher performance is targeted and it was made possible by adopting the ldquocurrent sensingrdquo (as opposed to the mainstream ldquovoltage sensingrdquo) technique. The general chip architecture is presented in contrast to a state of the art conventional circuit and a double size data buffer is found to be necessary for the maximum parallelism attained. Further conceptual changes designed to counterbalance the area increase are presented, hierarchical column architecture being of foremost importance. Optimization of other circuits, such as the charge pump, is another example. Fast data access rate is essential, and ways of boosting it are described, including a new redundancy scheme. ABL contribution to energy saving is also acknowledged.",
"title": ""
},
{
"docid": "9f1a1fdd9e6bc888abb14827d43d1980",
"text": "In recent years, many variance reduced algorithms for empirical risk minimization have been introduced. In contrast to vanilla SGD, these methods converge linearly on strong convex problems. To obtain the variance reduction, current methods either require frequent passes over the full data to recompute gradients—without making any progress during this time (like in SVRG), or they require memory of the same size as the input problem (like SAGA). In this work, we propose k-SVRG, an algorithm that interpolates between those two extremes: it makes best use of the available memory and in turn does avoid full passes over the data without making progress. We prove linear convergence of k-SVRG on strongly convex problems and convergence to stationary points on non-convex problems. Numerical experiments show the effectiveness of our method.",
"title": ""
},
{
"docid": "15f8f9a6a6ec038a9b48fcc30f39ad4e",
"text": "The macrophage mannose receptor (MR, CD206) is a C-type lectin expressed predominantly by most tissue macrophages, dendritic cells and specific lymphatic or endothelial cells. It functions in endocytosis and phagocytosis, and plays an important role in immune homeostasis by scavenging unwanted mannoglycoproteins. More attention is being paid to its particularly high expression in tissue pathology sites during disease such the tumor microenvironment. The MR recognizes a variety of microorganisms by their mannan-coated cell wall, which is exploited by adapted intracellular pathogens such as Mycobacterium tuberculosis, for their own survival. Despite the continued development of drug delivery technologies, the targeting of agents to immune cells, especially macrophages, for effective diagnosis and treatment of chronic infectious diseases has not been addressed adequately. In this regard, strategies that optimize MR-mediated uptake by macrophages in target tissues during infection are becoming an attractive approach. We review important progress in this area.",
"title": ""
},
{
"docid": "8898565b2a081af8374af7b5d25c52ec",
"text": "Traditionally, prejudice has been conceptualized as simple animosity. The stereotype content model (SCM) shows that some prejudice is worse. The SCM previously demonstrated separate stereotype dimensions of warmth (low-high) and competence (low-high), identifying four distinct out-group clusters. The SCM predicts that only extreme out-groups, groups that are both stereotypically hostile and stereotypically incompetent (low warmth, low competence), such as addicts and the homeless, will be dehumanized. Prior studies show that the medial prefrontal cortex (mPFC) is necessary for social cognition. Functional magnetic resonance imaging provided data for examining brain activations in 10 participants viewing 48 photographs of social groups and 12 participants viewing objects; each picture dependably represented one SCM quadrant. Analyses revealed mPFC activation to all social groups except extreme (low-low) out-groups, who especially activated insula and amygdala, a pattern consistent with disgust, the emotion predicted by the SCM. No objects, though rated with the same emotions, activated the mPFC. This neural evidence supports the prediction that extreme out-groups may be perceived as less than human, or dehumanized.",
"title": ""
},
{
"docid": "695a0e8ba9556afde6b22f29399616ba",
"text": "Microstrip lines (MSL) are widely used in microwave systems because of its low cost, light weight, and easy integration with other components. Substrate integrated waveguides (SIW), which inherit the advantages from traditional rectangular waveguides without their bulky configuration, aroused recently in low loss and high power planar applications. This chapter proposed the design and modeling of transitions between these two common structures. Research motives will be described firstly in the next subsection, followed by a literature survey on the proposed MSL to SIW transition structures. Outlines of the following sections in this chapter will also be given in the end of this section.",
"title": ""
},
{
"docid": "d642e6cc5de4dc194c6b2d7d0cf17d18",
"text": "The purpose of regression testing is to ensure that bug xes and new functionality introduced in a new version of a software do not adversely a ect the correct functionality inherited from the previous version. This paper explores e cient methods of selecting small subsets of regression test sets that may be used to es-",
"title": ""
},
{
"docid": "775969c0c6ad9224cdc9b73706cb5b4f",
"text": "This paper discusses how hot carrier injection (HCI) can be exploited to create a trojan that will cause hardware failures. The trojan is produced not via additional logic circuitry but by controlled scenarios that maximize and accelerate the HCI effect in transistors. These scenarios range from manipulating the manufacturing process to varying the internal voltage distribution. This new type of trojan is difficult to test due to its gradual hardware degradation mechanism. This paper describes the HCI effect, detection techniques and discusses the possibility for maliciously induced HCI trojans.",
"title": ""
},
{
"docid": "161bf0c4abd39223f881510594b459d8",
"text": "This paper describes a set of comparative exper iments for the problem of automatically ltering unwanted electronic mail messages Several vari ants of the AdaBoost algorithm with con dence rated predictions Schapire Singer have been applied which di er in the complexity of the base learners considered Two main conclu sions can be drawn from our experiments a The boosting based methods clearly outperform the baseline learning algorithms Naive Bayes and Induction of Decision Trees on the PU corpus achieving very high levels of the F measure b Increasing the complexity of the base learners al lows to obtain better high precision classi ers which is a very important issue when misclassi cation costs are considered",
"title": ""
},
{
"docid": "2cc1afe86873bb7d83e919d25fbd5954",
"text": "Cellular Automata (CA) have attracted growing attention in urban simulation because their capability in spatial modelling is not fully developed in GIS. This paper discusses how cellular automata (CA) can be extended and integrated with GIS to help planners to search for better urban forms for sustainable development. The cellular automata model is built within a grid-GIS system to facilitate easy access to GIS databases for constructing the constraints. The essence of the model is that constraint space is used to regulate cellular space. Local, regional and global constraints play important roles in a ecting modelling results. In addition, ‘grey’ cells are de ned to represent the degrees or percentages of urban land development during the iterations of modelling for more accurate results. The model can be easily controlled by the parameter k using a power transformation function for calculating the constraint scores. It can be used as a useful planning tool to test the e ects of di erent urban development scenarios. 1. Cellular automata and GIS for urban simulation Cellular automata (CA) were developed by Ulam in the 1940s and soon used by Von Neumann to investigate the logical nature of self-reproducible systems (White and Engelen 1993). A CA system usually consists of four elements—cells, states, neighbourhoods and rules. Cells are the smallest units which must manifest some adjacency or proximity. The state of a cell can change according to transition rules which are de ned in terms of neighbourhood functions. The notion of neighbourhood is central to the CA paradigm (Couclelis 1997), but the de nition of neighbourhood is rather relaxed. CA are cell-based methods that can model two-dimensional space. Because of this underlying feature, it does not take long for geographers to apply CA to simulate land use change, urban development and other changes of geographical phenomena. CA have become especially, useful as a tool for modelling urban spatial dynamics and encouraging results have been documented (Deadman et al. 1993, Batty and Xie 1994a, Batty and Xie 1997, White and Engelen 1997). The advantages are that the future trajectory of urban morphology can be shown virtually during the simulation processes. The rapid development of GIS helps to foster the application of CA in urban Internationa l Journal of Geographica l Information Science ISSN 1365-8816 print/ISSN 1362-3087 online © 2000 Taylor & Francis Ltd http://www.tandf.co.uk/journals/tf/13658816.html X. L i and A. G. Yeh 132 simulation. Some researches indicate that cell-based GIS may indeed serve as a useful tool for implementing cellular automata models for the purposes of geographical analysis (Itami 1994). Although current GIS are not designed for fast iterative computation, cellular automata can still be used by creating batch ® les that contain iterative command sequences. While linking cellular automata to GIS can overcome some of the limitations of current GIS (White and Engelen 1997), CA can bene® t from the useful information provided by GIS in de® ning transition rules. The data realism requirement of CA can be best satis® ed with the aid of GIS (Couclelis 1997). Space no longer needs to be uniform since the spatial di erence equations can be easily developed in the context of GIS (Batty and Xie 1994b). Most current GIS techniques have limitations in modelling changes in the landscape over time, but the integration of CA and GIS has demonstrated considerable potential (Itami 1988, Deadman et al. 1993). 
The limitations of contemporary GIS include its poor ability to handle dynamic spatial models, poor performance for many operations, and poor handling of the temporal dimension (Park and Wagner 1997 ). In coupling GIS with CA, CA can serves as an analytical engine to provide a ̄ exible framework for the programming and running of dynamic spatial models. 2. Constrained CA for the planning of sustainable urban development Interest in sustainable urban development has increased rapidly in recent years. Unfortunately, the concept of sustainable urban development is debatable because unique de® nitions and scopes do not exist (Haughton and Hunter 1994). However, this concept is very important to our society in dealing with its increasingly pressing resource and environmental problems. As more nations are implementing this concept in their development plans, it has created important impacts on national policies and urban planning. The concern over sustainable urban development will continue to grow, especially in the developing countries which are undergoing rapid urbanization. A useful way to clarify its ambiguity is to set up some working de® nitions. Some speci® c and narrow de® nitions do exist for special circumstances but there are no commonly accepted de® nitions. The working de® nitions can help to eliminate ambiguities and ® nd out solutions and better alternatives to existing development patterns. The conversion of agricultural land into urban land uses in the urbanization processes has become a serious issue for sustainable urban development in the developing countries. Take China as an example, it cannot a ord to lose a signi® cant amount of its valuable agricultural land because it has a huge growing population to feed. Unfortunately, in recent years, a large amount of such land have been unnecessarily lost and the forms of existing urban development cannot help to sustain its further development (Yeh and Li 1997, Yeh and Li 1998). The complete depletion of agricultural land resources would not be far away in some fast growing areas if such development trends continued. The main issue of sustainable urban development is to search for better urban forms that can help to sustain development, especially the minimization of unnecessary agricultural land loss. Four operational criteria for sustainable urban forms can be used: (1 ) not to convert too much agricultural land at the early stages of development; (2 ) to decide the amount of land consumption based on available land resources and population growth; (3 ) to guide urban development to sites which are less important for food production; and (4 ) to maintain compact development patterns. The objective of this research is to develop an operational CA model for Modelling sustainable urban development 133 sustainable urban development. A number of advantages have been identi® ed in the application of CA in urban simulation (Wolfram 1984, Itami 1988). Cellular automata are seen not only as a framework for dynamic spatial modelling but as a paradigm for thinking about complex spatial-temporal phenomena and an experimental laboratory for testing ideas (Itami 1994 ). Formally, standard cellular automata may be generalised as follows: St+1 = f (St, N ) (1 ) where S is a set of all possible states of the cellular automata, N is a neighbourhood of all cells providing input values for the function f, and f is a transition function that de® nes the change of the state from t to t+1. Standard cellular automata apply a b̀ottom-up’ approach. 
The approach argues that local rules can create complex patterns by running the models in iterations. It is central to the idea that cities should work from particular to general, and that they should seek to understand the small scale in order to understand the large (Batty and Xie 1994a). It is amazing to see that real urban systems can be modelled based on microscopic behaviour that may be the CA model’s most useful advantage . However, the t̀op-down’ critique nevertheless needs to be taken seriously. An example is that central governments have the power to control overall land development patterns and the amount of land consumption. With the implementations of sustainable elements into cellular automata, a new paradigm for thinking about urban planning emerges. It is possible to embed some constraints in the transition rules of cellular automata so that urban growth can be rationalised according to a set of pre-de® ned sustainable criteria. However, such experiments are very limited since many researchers just focus on the simulation of possible urban evolution and the understanding of growth mechanisms using CA techniques. The constrained cellular automata should be able to provide much better alternatives to actual development patterns. A good example is to produce a c̀ompact’ urban form using CA models. The need for sustainable cities is readily apparent in recent years. A particular issue is to seek the most suitable form for sustainable urban development. The growing spread of urban areas accelerating at an alarming rate in the last few decades re ̄ ects the dramatic pressure of human development on nature. The steady rise in urban areas and decline in agricultural land have led to the worsening of food production and other environmental problems. Urban development towards a compact form has been proposed as a means to alleviate the increasingly intensi® ed land use con ̄ icts. The morphology of a city is an important feature in the c̀ompact city theory’ (Jenks et al. 1996). Evidence indicates a strong link between urban form and sustainable development, although it is not simple and straightforward. Compact urban form can be a major means in guiding urban development to sustainability, especially in reducing the negative e ects of the present dispersed development in Western cities. However, one of the frequent problems in the compact city debate is the lack of proper tools to ensure successful implementation of the compact city because of its complexity (Burton et al. 1996). This study demonstrates that the constrained CA can be used to model compact cities and sustainable urban forms based on local, regional and global constraints. 3. Suitability and constraints for sustainable urban forms using CA In this constrained CA model, there are three important aspects of sustainable urban forms that need to be consideredÐ compact patterns, land q",
"title": ""
},
{
"docid": "872f224c2dbf06a335eee267bac4ec79",
"text": "Shallow supervised 1-hidden layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on two large-scale image recognition tasks: ImageNet and CIFAR-10. Using a simple set of ideas for architecture and training we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet. Extending our training methodology to construct individual layers by solving 2-and-3-hidden layer auxiliary problems, we obtain an 11-layer network that exceeds VGG-11 on ImageNet obtaining 89.8% top-5 single crop.To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. We conduct a wide range of experiments to study the properties this induces on the intermediate layers.",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
},
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
}
] |
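The constrained-CA passage in the list above formalises a standard cellular automaton as S_{t+1} = f(S_t, N). The fragment below is only a minimal sketch of such a synchronous transition rule, not the paper's constrained urban model: the grid size, the Moore neighbourhood, and the growth threshold are assumptions introduced for illustration.

```python
import numpy as np

def ca_step(state: np.ndarray, threshold: int = 3) -> np.ndarray:
    """One synchronous CA transition S_{t+1} = f(S_t, N).

    `state` is a binary grid (1 = urban, 0 = non-urban); a cell becomes urban
    when at least `threshold` of its 8 Moore neighbours are urban. The rule is
    a placeholder for the paper's constrained transition function.
    """
    # Count the 8 Moore neighbours of every cell by summing shifted copies.
    neigh = sum(
        np.roll(np.roll(state, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return np.where(neigh >= threshold, 1, state)

grid = np.zeros((50, 50), dtype=int)
grid[24:27, 24:27] = 1              # a small urban seed
for _ in range(10):                 # iterate the local rule, bottom-up
    grid = ca_step(grid)
```

A constrained variant in the spirit of that passage would additionally weight each candidate cell by a suitability score (for example, agricultural value) before applying the threshold.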
scidocsrr
|
4051fcb17e8731404dbe0abeef2fa326
|
OpenSec: Policy-Based Security Using Software-Defined Networking
|
[
{
"docid": "afa7ccbc17103f199abc38e98b6049bf",
"text": "Cloud computing is becoming a popular paradigm. Many recent new services are based on cloud environments, and a lot of people are using cloud networks. Since many diverse hosts and network configurations coexist in a cloud network, it is essential to protect each of them in the cloud network from threats. To do this, basically, we can employ existing network security devices, but applying them to a cloud network requires more considerations for its complexity, dynamism, and diversity. In this paper, we propose a new framework, CloudWatcher, which provides monitoring services for large and dynamic cloud networks. This framework automatically detours network packets to be inspected by pre-installed network security devices. In addition, all these operations can be implemented by writing a simple policy script, thus, a cloud network administrator is able to protect his cloud network easily. We have implemented the proposed framework, and evaluated it on different test network environments.",
"title": ""
},
{
"docid": "2601ff3b4af85883017d8fb7e28e5faa",
"text": "The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing a SDN-based solution for network resource management raises several challenges as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.",
"title": ""
}
] |
[
{
"docid": "160a27e958b5e853efb090f93bf006e8",
"text": "Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.",
"title": ""
},
{
"docid": "b9147ef0cf66bdb7ecc007a4e3092790",
"text": "This paper is related to the use of social media for disaster management by humanitarian organizations. The past decade has seen a significant increase in the use of social media to manage humanitarian disasters. It seems, however, that it has still not been used to its full potential. In this paper, we examine the use of social media in disaster management through the lens of Attribution Theory. Attribution Theory posits that people look for the causes of events, especially unexpected and negative events. The two major characteristics of disasters are that they are unexpected and have negative outcomes/impacts. Thus, Attribution Theory may be a good fit for explaining social media adoption patterns by emergency managers. We propose a model, based on Attribution Theory, which is designed to understand the use of social media during the mitigation and preparedness phases of disaster management. We also discuss the theoretical contributions and some practical implications. This study is still in its nascent stage and is research in progress.",
"title": ""
},
{
"docid": "e9b7eba9f15440ec7112a1938fad1473",
"text": "Recovery is not a new concept within mental health, although in recent times, it has come to the forefront of the policy agenda. However, there is no universal definition of recovery, and it is a contested concept. The aim of this study was to examine the British literature relating to recovery in mental health. Three contributing groups are identified: service users, health care providers and policy makers. A review of the literature was conducted by accessing all relevant published texts. A search was conducted using these terms: 'recovery', 'schizophrenia', 'psychosis', 'mental illness' and 'mental health'. Over 170 papers were reviewed. A thematic analysis was conducted. Six main themes emerged, which were examined from the perspective of the stakeholder groups. The dominant themes were identity, the service provision agenda, the social domain, power and control, hope and optimism, risk and responsibility. Consensus was found around the belief that good quality care should be made available to service users to promote recovery both as inpatient or in the community. However, the manner in which recovery was defined and delivered differed between the groups.",
"title": ""
},
{
"docid": "d13ddbafa8f0774aec3bf0f491b89c0c",
"text": "Dust explosions always claim lives and cause huge financial losses. Dust explosion risk can be prevented by inherently safer design or mitigated by engineering protective system. Design of explosion prevention and protection needs comprehensive knowledge and data on the process, workshop, equipment, and combustible materials. The knowledge includes standards, expertise of experts, and practical experience. The database includes accidents, dust explosion characteristics, inherently safer design methods, and protective design methods. Integration of such a comprehensive knowledge system is very helpful. The developed system has the following functions: risk assessment, accident analysis, recommendation of prevention and protection solution, and computer aided design of explosion protection. The software was based on Browser/Server architecture and was developed using mixed programming of ASP.Net and Prolog. The developed expert system can be an assistant to explosion design engineers and safety engineers of combustible dust handling plants.",
"title": ""
},
{
"docid": "d8ddb086b2bd881e68d14488025007f3",
"text": "This paper presents a compact model of SiC insulated-gate bipolar transistors (IGBTs) for power electronic circuit simulation. Here, we focus on the modeling of important specific features in the turn-off characteristics of the 4H-SiC IGBT, which are investigated with a 2-D device simulator, at supply voltages higher than 5 kV. These features are found to originate from the punch-through effect of the SiC IGBT. Thus, they are modeled based on the carrier distribution change caused by punch through and implemented into the silicon IGBT model named “HiSIM-IGBT” to obtain a practically useful SiC-IGBT model. The developed compact SiC-IGBT model for circuit simulation is verified with the 2-D device simulation data.",
"title": ""
},
{
"docid": "3ee9255be688d57902e4456b50392c11",
"text": "This paper presents a character segmentation method to address automatic number plate recognition problem. The method considered pixel intensity, character appearance, and arrangement of characters altogether to segment character regions. The method firstly discovers candidate blobs of characters by using connected component analysis and appearance-based character detection. A character recognizer is used for removing redundant and noisy blobs. Then, a trained classifier selects character blobs among the candidates by examining arrangement of the blobs. Experimental results show an achievement of 98.3% of segmentation rate, which prove the effectiveness of our method.",
"title": ""
},
{
"docid": "d9123053892ce671665a3a4a1694a57c",
"text": "Visual perceptual learning (VPL) is defined as a long-term improvement in performance on a visual task. In recent years, the idea that conscious effort is necessary for VPL to occur has been challenged by research suggesting the involvement of more implicit processing mechanisms, such as reinforcement-driven processing and consolidation. In addition, we have learnt much about the neural substrates of VPL and it has become evident that changes in visual areas and regions beyond the visual cortex can take place during VPL.",
"title": ""
},
{
"docid": "a2223d57a866b0a0ef138e52fb515b84",
"text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.",
"title": ""
},
{
"docid": "b61c9f69a2fffcf2c3753e51a3bbfa14",
"text": "..............................................................................................................ix 1 Interoperability .............................................................................................1 1.",
"title": ""
},
{
"docid": "9d3cb5ae51c25bb059a7503d1212e6a5",
"text": "People are generally unaware of the operation of the system of cognitive mechanisms that ameliorate their experience of negative affect (the psychological immune system), and thus they tend to overestimate the duration of their affective reactions to negative events. This tendency was demonstrated in 6 studies in which participants overestimated the duration of their affective reactions to the dissolution of a romantic relationship, the failure to achieve tenure, an electoral defeat, negative personality feedback, an account of a child's death, and rejection by a prospective employer. Participants failed to distinguish between situations in which their psychological immune systems would and would not be likely to operate and mistakenly predicted overly and equally enduring affective reactions in both instances. The present experiments suggest that people neglect the psychological immune system when making affective forecasts.",
"title": ""
},
{
"docid": "5fd751a9400da021bf0337e64f3ff18a",
"text": "In this letter, a dual-band and dual-polarization capacitive-fed slot patch antenna is investigated. The proposed antenna can operate at 1.575 GHz for Global Positioning System and 2.4 GHz for Wi-Fi system with the corresponding polarizations. A 90 ° hybrid coupler chip was used to excite the right-hand circular polarization required for optimum GPS performance. For the high frequency band, a pair of linearly polarized arc-shaped slots radiating at 2.4 GHz are embedded in the circular patch. The operating bandwidths of the antenna are enhanced by the multilayered geometry, and the capacitive disks feedpoints placed between the substrate layers. The measured impedance bandwidths at the lower and high bands are 320 and 230 MHz, respectively. The measured 3-dB axial-ratio bandwidth is 120 MHz.",
"title": ""
},
{
"docid": "1cfcc98bcf1e7be84a4e5f984327cb96",
"text": "It is approximately 50 years since the first computational experiments were conducted in what has become known today as the field of Genetic Programming (GP), twenty years since John Koza named and popularised the method, and ten years since the first issue appeared of the Genetic Programming & Evolvable Machines journal. In particular, during the past two decades there has been a significant range and volume of development in the theory and application of GP, and in recent years the field has become increasingly applied. There remain a number of significant open issues despite the successful application of GP to a number of challenging real-world problem domains and progress in the development of a theory explaining the behavior and dynamics of GP. These issues must be addressed for GP to realise its full potential and to become a trusted mainstream member of the computational problem solving toolkit. In this paper we outline some of the challenges and open issues that face researchers and practitioners of GP. We hope this overview will stimulate debate, focus the direction of future research to deepen our understanding of GP, and further the development of more powerful problem solving algorithms.",
"title": ""
},
{
"docid": "272d83db41293889d9ca790717983193",
"text": "The ability to measure the level of customer satisfaction with online shopping is essential in gauging the success and failure of e-commerce. To do so, Internet businesses must be able to determine and understand the values of their existing and potential customers. Hence, it is important for IS researchers to develop and validate a diverse array of metrics to comprehensively capture the attitudes and feelings of online customers. What factors make online shopping appealing to customers? What customer values take priority over others? This study’s purpose is to answer these questions, examining the role of several technology, shopping, and product factors on online customer satisfaction. This is done using a conjoint analysis of consumer preferences based on data collected from 188 young consumers. Results indicate that the three most important attributes to consumers for online satisfaction are privacy (technology factor), merchandising (product factor), and convenience (shopping factor). These are followed by trust, delivery, usability, product customization, product quality, and security. Implications of these findings are discussed and suggestions for future research are provided.",
"title": ""
},
{
"docid": "a17e4b2fdab3ae2b82728e1e49559612",
"text": "Top-k processing in uncertain databases is semantically and computationally different from traditional top-k processing. The interplay between score and uncertainty makes traditional techniques inapplicable. We introduce new probabilistic formulations for top-k queries. Our formulations are based on \"marriage\" of traditional top-k semantics and possible worlds semantics. In the light of these formulations, we construct a framework that encapsulates a state space model and efficient query processing techniques to tackle the challenges of uncertain data settings. We prove that our techniques are optimal in terms of the number of accessed tuples and materialized search states. Our experiments show the efficiency of our techniques under different data distributions with orders of magnitude improvement over naive materialization of possible worlds.",
"title": ""
},
{
"docid": "06272b99e56db2cb79c336047268c064",
"text": "In this paper, we describe our proposed approach for participating in the Third Emotion Recognition in the Wild Challenge (EmotiW 2015). We focus on the sub-challenge of Audio-Video Based Emotion Recognition using the AFEW dataset. The AFEW dataset consists of 7 emotion groups corresponding to the 7 basic emotions. Each group includes multiple videos from movie clips with people acting a certain emotion. In our approach, we extract LBP-TOP-based video features, openEAR energy/spectral-based audio features, and CNN (convolutional neural network) based deep image features by fine-tuning a pre-trained model with extra emotion images from the web. For each type of features, we run a SVM grid search to find the best RBF kernel. Then multi-kernel learning is employed to combine the RBF kernels to accomplish the feature fusion and generate a fused RBF kernel. Running multi-class SVM classification, we achieve a 45.23% test accuracy on the AFEW dataset. We then apply a decision optimization method to adjust the label distribution closer to the ground truth, by setting offsets for some of the classifiers' prediction confidence score. By applying this modification, the test accuracy increases to 50.46%, which is a significant improvement comparing to the baseline accuracy 39.33% .",
"title": ""
},
{
"docid": "52dbfe369d1875c402220692ef985bec",
"text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.",
"title": ""
},
{
"docid": "5a4959ef609e2ed64018aed292b7f27f",
"text": "With thousands of alerts identified by IDSs every day, the process of distinguishing which alerts are important (i.e., true positives) and which are is irrelevant (i.e., false positives) is become more complicated. The security administrator must analyze each single alert either a true of false alert. This paper proposes an alert prioritization model, which is based on risk assessment. The model uses indicators, such as priority, reliability, asset value, as decision factors to calculate alert's risk. The objective is to determine the impact of certain alerts generated by IDS on the security status of an information system, also improve the detection of intrusions using snort by classifying the most critical alerts by their levels of risk, thus, only the alerts that presents a real threat will be displayed to the security administrator, so, we reduce the number of false positives, also we minimize the analysis time of the alerts. The model was evaluated using KDD Cup 99 Dataset as test environment and a pattern matching algorithm.",
"title": ""
},
{
"docid": "124dffade8cbc98b95292a21b71b31e0",
"text": "High performance photodetectors play important roles in the development of innovative technologies in many fields, including medicine, display and imaging, military, optical communication, environment monitoring, security check, scientific research and industrial processing control. Graphene, the most fascinating two-dimensional material, has demonstrated promising applications in various types of photodetectors from terahertz to ultraviolet, due to its ultrahigh carrier mobility and light absorption in broad wavelength range. Graphene field effect transistors are recognized as a type of excellent transducers for photodetection thanks to the inherent amplification function of the transistors, the feasibility of miniaturization and the unique properties of graphene. In this review, we will introduce the applications of graphene transistors as photodetectors in different wavelength ranges including terahertz, infrared, visible, and ultraviolet, focusing on the device design, physics and photosensitive performance. Since the device properties are closely related to the quality of graphene, the devices based on graphene prepared with different methods will be addressed separately with a view to demonstrating more clearly their advantages and shortcomings in practical applications. It is expected that highly sensitive photodetectors based on graphene transistors will find important applications in many emerging areas especially flexible, wearable, printable or transparent electronics and high frequency communications.",
"title": ""
},
{
"docid": "b0593843ce815016a003c60f8f154006",
"text": "This paper introduces a method for acquiring forensic-grade evidence from Android smartphones using open source tools. We investigate in particular cases where the suspect has made use of the smartphone's Wi-Fi or Bluetooth interfaces. We discuss the forensic analysis of four case studies, which revealed traces that were left in the inner structure of three mobile Android devices and also indicated security vulnerabilities. Subsequently, we propose a detailed plan for forensic examiners to follow when dealing with investigations of potential crimes committed using the wireless facilities of a suspect Android smartphone. This method can be followed to perform physical acquisition of data without using commercial tools and then to examine them safely in order to discover any activity associated with wireless communications. We evaluate our method using the Association of Chief Police Officers' (ACPO) guidelines of good practice for computer-based, electronic evidence and demonstrate that it is made up of an acceptable host of procedures for mobile forensic analysis, focused specifically on device Bluetooth and Wi-Fi facilities.",
"title": ""
}
] |
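The alert-prioritization passage in the list above describes combining priority, reliability and asset value into a per-alert risk score. The sketch below is a hedged illustration only: the multiplicative combination, the field ranges and the threshold are assumptions for the example, not the model actually evaluated with Snort and KDD Cup 99.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    signature: str
    priority: int      # e.g. 1-5, from the IDS rule
    reliability: int   # e.g. 1-10, confidence in the detection
    asset_value: int   # e.g. 1-10, importance of the targeted host

def risk(alert: Alert) -> float:
    # Simple multiplicative combination; the real model may weight the factors differently.
    return alert.priority * alert.reliability * alert.asset_value

def prioritize(alerts: list[Alert], threshold: float = 100.0) -> list[Alert]:
    """Return only the alerts whose risk clears the threshold, highest risk first."""
    critical = [a for a in alerts if risk(a) >= threshold]
    return sorted(critical, key=risk, reverse=True)

alerts = [
    Alert("ICMP ping sweep", priority=1, reliability=4, asset_value=3),
    Alert("SQL injection attempt", priority=5, reliability=8, asset_value=9),
]
for a in prioritize(alerts):
    print(f"{a.signature}: risk={risk(a)}")
```

Filtering on a risk threshold is what lets only alerts that present a real threat reach the administrator, which is the effect the passage aims for.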
scidocsrr
|
c935c17cbf376a9c344e6c71deade676
|
Robustness of Federated Averaging for Non-IID Data
|
[
{
"docid": "244b583ff4ac48127edfce77bc39e768",
"text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.",
"title": ""
}
] |
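The federated-optimization passage above motivates training one central model from many clients that each hold a small, non-representative data shard. The sketch below shows the federated averaging idea in that spirit, as a rough illustration only: local updates on each client followed by a data-size-weighted average of the client models. The linear model, learning rate, round count and the toy non-IID shards are assumptions, not the configuration from the paper.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of local gradient descent on one client's linear model."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def federated_averaging(clients, w, rounds=20):
    """clients: list of (X, y) shards kept on-device; w: initial global weights."""
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        local_models = [local_update(w, X, y) for X, y in clients]
        # Weight each client's model by its share of the total data.
        w = sum(n * wl for n, wl in zip(sizes, local_models)) / sizes.sum()
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Non-IID toy shards: each client sees a different slice of feature space.
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(30, 2))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 30)))
w = federated_averaging(clients, np.zeros(2))
```

Comparing the averaged model against one trained on the pooled data is one simple way to probe the robustness question the query raises for non-IID shards.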
[
{
"docid": "93dd7ecb1707f7b404e79d79dac0a7ba",
"text": "Information quality has received great attention from both academics and practitioners since it plays an important role in decision-making process. The need of high information quality in organization is increase in order to reach business excellent. Total Information Quality Management (TIQM) offers solution to solve information quality problems through a method for building an effective information quality management (IQM) with continuous improvement in whole process. However, TIQM does not have a standard measure in determining the process maturity level. Thus causes TIQM process maturity level cannot be determined exactly so that the assessment and improvement process will be difficult to be done. The contribution of this research is the process maturity indicators and measures based on TIQM process and Capability Maturity Model (CMM) concepts. It have been validated through an Expert Judgment using the Delphi method and implemented through a case study.",
"title": ""
},
{
"docid": "c25144cf41462c58820fdcd3652e9fec",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.02.043 * Corresponding author. Tel.: +3",
"title": ""
},
{
"docid": "6f7332494ffc384eaae308b2116cab6a",
"text": "Investigations of the relationship between pain conditions and psychopathology have largely focused on depression and have been limited by the use of non-representative samples (e.g. clinical samples). The present study utilized data from the Midlife Development in the United States Survey (MIDUS) to investigate associations between three pain conditions and three common psychiatric disorders in a large sample (N = 3,032) representative of adults aged 25-74 in the United States population. MIDUS participants provided reports regarding medical conditions experienced over the past year including arthritis, migraine, and back pain. Participants also completed several diagnostic-specific measures from the Composite International Diagnostic Interview-Short Form [Int. J. Methods Psychiatr. Res. 7 (1998) 171], which was based on the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders [American Psychiatric Association 1987]. The diagnoses included were depression, panic attacks, and generalized anxiety disorder. Logistic regression analyses revealed significant positive associations between each pain condition and the psychiatric disorders (Odds Ratios ranged from 1.48 to 3.86). The majority of these associations remained statistically significant after adjusting for demographic variables, the other pain conditions, and other medical conditions. Given the emphasis on depression in the pain literature, it was noteworthy that the associations between the pain conditions and the anxiety disorders were generally larger than those between the pain conditions and depression. These findings add to a growing body of evidence indicating that anxiety disorders warrant further attention in relation to pain. The clinical and research implications of these findings are discussed.",
"title": ""
},
{
"docid": "8af81ca6334ad51856ac523fecd65cc5",
"text": "Few studies have examined how changes in materialism relate to changes in well-being; fewer have experimentally manipulated materialism to change wellbeing. Studies 1, 2, and 3 examined how changes in materialistic aspirations related to changes in well-being, using varying time frames (12 years, 2 years, and 6 months), samples (US young adults and Icelandic adults), and measures of materialism and well-being. Across all three studies, results supported the hypothesis that people’s well-being improves as they place relatively less importance on materialistic goals and values, whereas orienting toward materialistic goals relatively more is associated with decreases in well-being over time. Study 2 additionally demonstrated that this association was mediated by changes in psychological need satisfaction. A fourth, experimental study showed that highly materialistic US adolescents who received an intervention that decreased materialism also experienced increases in self-esteem over the next several months, relative to a control group. Thus, well-being changes as people change their relative focus on materialistic goals.",
"title": ""
},
{
"docid": "537793712e4e62d66e35b3c9114706f2",
"text": "Database indices provide a non-discriminative navigational infrastructure to localize tuples of interest. Their maintenance cost is taken during database updates. In this work we study the complementary approach, addressing index maintenance as part of query processing using continuous physical reorganization, i.e., cracking the database into manageable pieces. Each query is interpreted not only as a request for a particular result set, but also as an advice to crack the physical database store into smaller pieces. Each piece is described by a query, all of which are assembled in a cracker index to speedup future search. The cracker index replaces the non-discriminative indices (e.g., B-trees and hash tables) with a discriminative index. Only database portions of past interest are easily localized. The remainder is unexplored territory and remains non-indexed until a query becomes interested. The cracker index is fully self-organized and adapts to changing query workloads. With cracking, the way data is physically stored self-organizes according to query workload. Even with a huge data set, only tuples of interest are touched, leading to significant gains in query performance. In case the focus shifts to a different part of the data, the cracker index will automatically adjust to that. We report on our design and implementation of cracking in the context of a full fledged relational system. It led to a limited enhancement to its relational algebra kernel, such that cracking could be piggy-backed without incurring too much processing overhead. Furthermore, we illustrate the ripple effect of dynamic reorganization on the query plans derived by the SQL optimizer. The experiences and results obtained are indicative of a significant reduction in system complexity with clear performance benefits. ∗Stratos Idreos is the contact author ([email protected]) and a Ph.D student at CWI",
"title": ""
},
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
},
{
"docid": "4d7e876d61060061ba6419869d00675e",
"text": "Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on a contextual modeling approach that estimate deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to contextaware recommendation than modeling contextual rating deviations.",
"title": ""
},
{
"docid": "077e4307caf9ac3c1f9185f0eaf58524",
"text": "Many text mining tools cannot be applied directly to documents available on web pages. There are tools for fetching and preprocessing of textual data, but combining them in one working tool chain can be time consuming. The preprocessing task is even more labor-intensive if documents are located on multiple remote sources with different storage formats. In this paper we propose the simplification of data preparation process for cases when data come from wide range of web resources. We developed an open-sourced tool, called Kayur, that greatly minimizes time and effort required for routine data preprocessing steps, allowing to quickly proceed to the main task of data analysis. The datasets generated by the tool are ready to be loaded into a data mining workbench, such as WEKA or Carrot2, to perform classification, feature prediction, and other data mining tasks.",
"title": ""
},
{
"docid": "eaec7fb5490ccabd52ef7b4b5abd25f6",
"text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-wighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance than other state-of-the-art segmentation methods.",
"title": ""
},
{
"docid": "ed3044439e2ca81cbe57a6d4d2e7707a",
"text": "ness. Second, every attribute specified for a concept is shared by more than one instance of the concept. Thus, the information contained in a concept is an abstraction across instances of the concept. The overlapping networks of shared attributes thus formed hold conceptual categories together. In this respect, the family resemblance view is like the classical view: Both maintain that the instances of a concept cohere because they are similar to one another by virtueof sharing certain attributes. Weighted attributes. An object that shares attributes with many members of a category bears greater family resemblance to that category than an object that shares attributes with few members. This suggests that attributes that are shared by many members confer a greater degree of family resemblance than those that are shared by a few. A third characteristic of the family resemblance view is that it assumes that concept attributes are \"weighted\" according to their relevance for conferring family resemblance to the category. In general, that relevance is taken to be a function of the number of category instances (and perhaps noninstances) that share the attribute. Presumably, if the combined relevance weights of the attributes of some novel object exceed a certain level (what might be called the membership threshold or criterion), that object will be 2 Here and throughout, I use relevance to include both relevance and salience as used by Ortony, Vondruska, Foss, and Jones (1985). 504 LLOYD K. KOMATSU considered an instance of the category (Medin, 1983; Rosch & Mervis, 1975; E. E. Smith & Medin, 1981). The greater the degree to which the combined relevance weights exceed the threshold, the more typical an instance it is (see also Shafir, Smith, & Osherson, 1990). By this measure, an object must have a large number of heavily weighted attributes to be judged highly typical of a given category. Because such heavily weighted attributes are probably shared by many category instances and relatively few noninstances, an object highly typical of a category is likely to lie near the central tendencies of the category (see Retention of Central Tendencies, below), and is not likely to be typical of or lie near the central tendencies of any other category. Independence and additive combination of weights: Linear separability. Attribute weights can be combined using a variety of methods (cf. Medin & Schaffer, 1978; Reed, 1972). In the method typically associated with the family resemblance view (adapted from Tversky's, 1977, contrast model of similarity), attribute weights are assumed to be independent and combined by adding (Rosch & Mervis, 1975; E. E. Smith & Medin, 1981). This leads to a fourth characteristic of the (modal) family resemblance view: It predicts that instances and noninstances of a concept can be perfectly partitioned by a linear discriminant function (i.e., if one was to plot a set of objects by the combined weights of their attributes, all instances would fall to one side of a line, and all noninstances would fall on the other side; Medin & SchafFer, 1978; Medin & Schwanenflugel, 1981; Nakamura, 1985; Wattenmaker, Dewey, Murphy, & Medin, 1986). Thus the (modal) family resemblance view predicts that concepts are \"linearly separable.\" Retention of central tendencies. The phrase family resemblance is used in two ways. 
In the sense that I have focused on until now, the family resemblance of an object to a category increases as the similarity between that object and all other members of the category increases and the similarity between that object and all nonmembers of the category decreases. This use of family resemblance (probably the use more reflective of Wittgenstein's, 1953, original ideas) has an extensional emphasis: It describes a relationship among objects and makes no assumptions about how the category of objects is represented mentally (i.e., about the intension of the word or what I have been calling the concept). In the second sense, family resemblance increases as the similarity between an object and the central tendencies of the category increases (Hampton, 1979). This use of family resemblance has an intentional emphasis: It describes a relationship between objects and a mental representation (of the central tendencies of a category). Although these two ways of thinking about family resemblance, average similarity to all instances and similarity to a central tendency, are different (cf. Reed, 1972), Barsalou (1985, 1987) points out that they typically yield roughly the same outcome, much as the average difference between a number and a set of other numbers is roughly the same as the difference between that number and the average of that set of other numbers. (For example, consider the number 2 and the set of numbers 3, 5, and 8. The average difference between 2 and 3, 5, and 8 is 3.33, and the difference between 2 and the average of 3,5, and 8 is 3.33.) Barsalou argues that although for most purposes the two ways of thinking about family resemblance are equivalent (one of the reasons the exemplar and family resemblance views are often difficult to distinguish empirically; see below), computation in terms of central tendencies may be more plausible psychologically (because fewer comparisons are involved in comparing an object with the central tendencies of a concept than with every instance and noninstance of the concept; see also Barresi, Robbins, & Shain, 1975). This suggests a fifth characteristic of the family resemblance view: A concept provides a summary of a category in terms of the central tendencies of the members of that category rather than in terms of the representations of individual instances. Economy, Informativeness, Coherence, and Naturalness Both the classical and the family resemblance views explain conceptual coherence in terms of the attributes shared by the members of a category (i.e., the similarity among the instances of a concept). The critical difference between the two views lies in the constraints placed on the attributes shared. In the classical view, all instances are similar in that they share a set of necessary and sufficient attributes (i.e., the definition). The family resemblance view relaxes this constraint and requires only that every attribute specified by the concept be shared by more than one instance. Although this requirement confers a certain amount of economy to the family resemblance view (every piece of information applies to several instances), removing the definitional constraint allows family resemblance representations to include nondefinitional information. 
In particular, concepts are likely to specify information beyond that true of all instances or beyond that strictly needed to understand what Medin and Smith (1984) call linguistic meaning (the different kinds of relations that hold among words such as synonymy, antynomy, hyponomy, anomaly, and contradiction as usually understood; cf. Katz, 1972; Katz & Fodor, 1963) to include information about how the objects referred to may relate to one another and to the world. It is not clear whether this loss in economy results in a concomitant increase in informativeness: Although in the family resemblance view more information may be associated with a concept than in the classical, not all of that information applies to every instance of the concept. In the family resemblance view, attributes can be inferred to inhere in different instances only with some level of probability. Thus the informativeness of the individual attributes specified is somewhat compromised. With no a priori constraint on the nature (or level) of similar3 There are several different ways to approach the representation of the central tendencies of a category. E. E. Smith and Medin (1981), for example, identified three approaches to what they called the probablistic view: the featural, the dimensional, and the holistic. E. E. Smith and Medin provided ample evidence for rejecting the holistic approach on both empirical and theoretical grounds (see also McNamara & Miller, 1989). They also argued that the similarities between the featural and dimensional approaches suggest that they might profitably be combined into a single position that could be called the \"component\" approach (E. E. Smith & Medin, 1981, p. 164) and concluded that the component approach is the only viable variant. RECENT VIEWS OF CONCEPTS 505 ity shared by the instances of a concept, the family resemblance view has difficulty specifying which similarities count and which do not when it comes to setting the boundaries between concepts. A Great Dane and a Bedlington terrier appear to share few similarities, but they share enough so that both are dogs. But a Bedlington terrier seems to share as many similarities with a lamb as it does with a Great Dane. Why is a Bedlington terrier a dog and not a lamb? Presumably, the family resemblance view would predict that the summed weights of Bedlington terrier attributes lead to its being more similar to other dogs than to lambs and result in its being categorized as a dog rather than a lamb. But to determine those weights, we need to know how common those attributes are among dogs and lambs. This implies that the categorization of Bedlington terriers must be preceded by the partitioning of the world into dog and lamb. Without that prior partitioning, the dog versus lamb weights of Bedlington terrier attributes cannot be determined. To answer the question of what privileges the categorization of a Bedlington terrier with the Great Dane rather than the lamb requires answering what privileges the partitioning of the world into dogs and lambs. Rosch (Rosch, 1978; Rosch & Mervis, 1975) argues that certain partitionings of the world (including, presumably, into dogs and lambs) are privileged, more immediate or direct, and arise naturally from the interaction of our perceptual apparatus and the environment. Thus whereas the classical view",
"title": ""
},
{
"docid": "4f0b28ded91c48913a13bde141a3637f",
"text": "This paper presents our work in mapping the design space of techniques for temporal graph visualisation. We identify two independent dimensions upon which the techniques can be classified: graph structural encoding and temporal encoding. Based on these dimensions, we create a matrix into which we organise existing techniques. We identify gaps in this design space which may prove interesting opportunities for the development of novel techniques. We also consider additional dimensions upon which further useful classification could be made. In organising the disparate existing approaches from a wide range of domains, our classification will assist those new to the research area, and designers and evaluators developing systems for temporal graph data by raising awareness of the range of possible approaches available, and highlighting possible directions for further research.",
"title": ""
},
{
"docid": "a6fbd3f79105fd5c9edfc4a0292a3729",
"text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.",
"title": ""
},
{
"docid": "7120d5acf58f8ec623d65b4f41bef97d",
"text": "BACKGROUND\nThis study analyzes the problems and consequences associated with prolonged use of laparoscopic instruments (dissector and needle holder) and equipments.\n\n\nMETHODS\nA total of 390 questionnaires were sent to the laparoscopic surgeons of the Spanish Health System. Questions were structured on the basis of 4 categories: demographics, assessment of laparoscopic dissector, assessment of needle holder, and other informations.\n\n\nRESULTS\nA response rate of 30.26% was obtained. Among them, handle shape of laparoscopic instruments was identified as the main element that needed to be improved. Furthermore, the type of instrument, electrocautery pedals and height of the operating table were identified as major causes of forced positions during the use of both surgical instruments.\n\n\nCONCLUSIONS\nAs far as we know, this is the largest Spanish survey conducted on this topic. From this survey, some ergonomic drawbacks have been identified in: (a) the instruments' design, (b) the operating tables, and (c) the posture of the surgeons.",
"title": ""
},
{
"docid": "2ad8723c9fce1a6264672f41824963f8",
"text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.",
"title": ""
},
{
"docid": "b2d256cd40e67e3eadd3f5d613ad32fa",
"text": "Due to the wide spread of cloud computing, arises actual question about architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservice model is a pressing issue. There are constantly developing new methods of testing both individual microservices and cloud applications at a whole. This article presents our vision of a framework for the validation of the microservice cloud applications, providing an integrated approach for the implementation of various testing methods of such applications, from basic unit tests to continuous stability testing.",
"title": ""
},
{
"docid": "753eb03a060a5e5999eee478d6d164f9",
"text": "Recently reported results with distributed-vector word representations in natural language processing make them appealing for incorporation into a general cognitive architecture like Sigma. This paper describes a new algorithm for learning such word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma. The effectiveness and speed of the algorithm are evaluated via a comparison of an external simulation of it with state-of-the-art algorithms. The results from more limited experiments with Sigma are also promising, but more work is required for it to reach the effectiveness and speed of the simulation.",
"title": ""
},
{
"docid": "c5118bfd338ed2879477023b69fff911",
"text": "The paper describes a study and an experimental verification of remedial strategies against failures occurring in the inverter power devices of a permanent-magnet synchronous motor drive. The basic idea of this design consists in incorporating a fourth inverter pole, with the same topology and capabilities of the other conventional three poles. This minimal redundant hardware, appropriately connected and controlled, allows the drive to face a variety of power device fault conditions while maintaining a smooth torque production. The achieved results also show the industrial feasibility of the proposed fault-tolerant control, that could fit many practical applications.",
"title": ""
},
{
"docid": "98b78340925729e580f888f9ab2d8453",
"text": "This paper describes the Jensen-Shannon divergence (JSD) and Hilbert space embedding. With natural definitions making these considerations precise, one finds that the general Jensen-Shannon divergence related to the mixture is the minimum redundancy, which can be achieved by the observer. The set of distributions with the metric /spl radic/JSD can even be embedded isometrically into Hilbert space and the embedding can be identified.",
"title": ""
},
{
"docid": "e2630765e2fa4b203a4250cb5b69b9e9",
"text": "Thirteen years have passed since Karl Sims published his work onevolving virtual creatures. Since then,several novel approaches toneural network evolution and genetic algorithms have been proposed.The aim of our work is to apply recent results in these areas tothe virtual creatures proposed by Karl Sims, leading to creaturescapable of solving more complex tasks. This paper presents oursuccess in reaching the first milestone -a new and completeimplementation of the original virtual creatures. All morphologicaland control properties of the original creatures were implemented.Laws of physics are simulated using ODE library. Distributedcomputation is used for CPU-intensive tasks, such as fitnessevaluation.Experiments have shown that our system is capable ofevolving both morphology and control of the creatures resulting ina variety of non-trivial swimming and walking strategies.",
"title": ""
},
{
"docid": "7892a17a84d54bb6975cb7b8229242a9",
"text": "The way people conceptualize space is an important consideration for the design of geographic information systems, because a better match with peopleÕs thinking is expected to lead to easier-touse information systems. Everyday space, the basis to geographic information systems (GISs), has been characterized in the literature as being either small-scale (from table-top to room-size spaces) or large-scale (inside-of-building spaces to city-size space). While this dichotomy of space is grounded in the view from psychology that peopleÕs perception of space, spatial cognition, and spatial behavior are experience-based, it is in contrast to current GISs, which enable us to interact with large-scale spaces as though they were small-scale or manipulable. We analyze different approaches to characterizing spaces and propose a unified view in which space is based on the physical properties of manipulability, locomotion, and size of space. Within the structure of our framework, we distinguish six types of spaces: manipulable object space (smaller than the human body), non-manipulable object space (greater than the human body, but less than the size of a building), environmental space (from inside building spaces to city-size spaces), geographic space (state, country, and continent-size spaces), panoramic space (spaces perceived via scanning the landscape), and map space. Such a categorization is an important part of Naive Geography, a set of theories of how people intuitively or spontaneously conceptualize geographic space and time, because it has implications for various theoretical and methodological questions concerning the design and use of spatial information tools. Of particular concern is the design of effective spatial information tools that lead to better communication.",
"title": ""
}
] |
scidocsrr
|
6570b5932fa1ab575920ce7fbf745d77
|
Genital Beautification: A Concept That Offers More Than Reduction of the Labia Minora
|
[
{
"docid": "da93678f1b1070d68cfcbc9b7f6f88fe",
"text": "Dermal fat grafts have been utilized in plastic surgery for both reconstructive and aesthetic purposes of the face, breast, and body. There are multiple reports in the literature on the male phallus augmentation with the use of dermal fat grafts. Few reports describe female genitalia aesthetic surgery, in particular rejuvenation of the labia majora. In this report we describe an indication and use of autologous dermal fat graft for labia majora augmentation in a patient with loss of tone and volume in the labia majora. We found that this procedure is an option for labia majora augmentation and provides a stable result in volume-restoration.",
"title": ""
}
] |
[
{
"docid": "7c593a9fc4de5beb89022f7d438ffcb8",
"text": "The design of a low power low drop out voltage regulator with no off-chip capacitor and fast transient responses is presented in this paper. The LDO regulator uses a combination of a low power operational trans-conductance amplifier and comparators to drive the gate of the PMOS pass element. The amplifier ensures stability and accurate setting of the output voltage in addition to power supply rejection. The comparators ensure fast response of the regulator to any load or line transients. A settling time of less than 200ns is achieved in response to a load transient step of 50mA with a rise time of 100ns with an output voltage spike of less than 200mV at an output voltage of 3.25 V. A line transient step of 1V with a rise time of 100ns results also in a settling time of less than 400ns with a voltage spike of less than 100mV when the output voltage is 2.6V. The regulator is fabricated using a standard 0.35μm CMOS process and consumes a quiescent current of only 26 μA.",
"title": ""
},
{
"docid": "7aeb10faf8590ed9f4054bafcd4dee0c",
"text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.",
"title": ""
},
{
"docid": "93f7a6057bf0f446152daf3233d000aa",
"text": "Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we keep drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open-source (https://github.com/zhangxaochen/CuFusion) for other researchers to reproduce and verify our results.",
"title": ""
},
{
"docid": "e2f57214cd2ec7b109563d60d354a70f",
"text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .",
"title": ""
},
{
"docid": "d5b4ba8e3491f4759924be4ceee8f418",
"text": "Researchers and practitioners have long regarded procrastination as a self-handicapping and dysfunctional behavior. In the present study, the authors proposed that not all procrastination behaviors either are harmful or lead to negative consequences. Specifically, the authors differentiated two types of procrastinators: passive procrastinators versus active procrastinators. Passive procrastinators are procrastinators in the traditional sense. They are paralyzed by their indecision to act and fail to complete tasks on time. In contrast, active procrastinators are a \"positive\" type of procrastinator. They prefer to work under pressure, and they make deliberate decisions to procrastinate. The present results showed that although active procrastinators procrastinate to the same degree as passive procrastinators, they are more similar to nonprocrastinators than to passive procrastinators in terms of purposive use of time, control of time, self-efficacy belief, coping styles, and outcomes including academic performance. The present findings offer a more sophisticated understanding of procrastination behavior and indicate a need to reevaluate its implications for outcomes of individuals.",
"title": ""
},
{
"docid": "4cef84bb3a1ff5f5ed64a4149d501f57",
"text": "In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is the intelligence exhibited by machines or software. It is the subfield of computer science. Artificial Intelligence is becoming a popular field in computer science as it has enhanced the human life in many areas. Artificial intelligence in the last two decades has greatly improved performance of the manufacturing and service systems. Study in the area of artificial intelligence has given rise to the rapidly growing technology known as expert system. Application areas of Artificial Intelligence is having a huge impact on various fields of life as expert system is widely used these days to solve the complex problems in various areas as science, engineering, business, medicine, weather forecasting. The areas employing the technology of Artificial Intelligence have seen an increase in the quality and efficiency. This paper gives an overview of this technology and the application areas of this technology. This paper will also explore the current use of Artificial Intelligence technologies in the PSS design to damp the power system oscillations caused by interruptions, in Network Intrusion for protecting computer and communication networks from intruders, in the medical areamedicine, to improve hospital inpatient care, for medical image classification, in the accounting databases to mitigate the problems of it and in the computer games.",
"title": ""
},
{
"docid": "3c135cae8654812b2a4f805cec78132e",
"text": "Binarized Neural Network (BNN) removes bitwidth redundancy in classical CNN by using a single bit (-1/+1) for network parameters and intermediate representations, which has greatly reduced the off-chip data transfer and storage overhead. However, a large amount of computation redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of ~78% input similarity and ~59% weight similarity among weight kernels, measured by our proposed metric in common network architectures. Thus there does exist redundancy that can be exploited to further reduce the amount of on-chip computations.\n Motivated by the observation, in this paper, we proposed two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights to pick the better strategy of these two for different datasets and network models. By reusing the results from previous computation, much cycles for data buffer access and computations can be skipped. By experiments, we demonstrate that 80% of the computation and 40% of the buffer access can be skipped by exploiting BNN similarity. Thus, our design can achieve 17% reduction in total power consumption, 54% reduction in on-chip power consumption and 2.4× maximum speedup, compared to the baseline without applying our reuse technique. Our design also shows 1.9× more area-efficiency compared to state-of-the-art BNN inference design. We believe our deployment of BNN on FPGA leads to a promising future of running deep learning models on mobile devices.",
"title": ""
},
{
"docid": "2a17f4c307fac8491410295640b5133c",
"text": "This work adopts a standard Denavit–Hartenberg method to model a PUMA 560 spot welding robot as the object of study. The forward and inverse kinematics solutions are then analyzed. To address the shortcomings of the ant colony algorithm, factors from the particle swarm optimization and the genetic algorithm are introduced into this algorithm. Subsequently, the resulting hybrid algorithm and the ant colony algorithm are used to conduct trajectory planning in the shortest path. Experimental data and simulation result show that the hybrid algorithm is significantly better in terms of initial solution speed and optimal solution quality than the ant colony algorithm. The feasibility and effectiveness of the hybrid algorithm in the trajectory planning of a robot are thus verified.",
"title": ""
},
{
"docid": "fcf8649ff7c2972e6ef73f837a3d3f4d",
"text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.",
"title": ""
},
{
"docid": "0c3ba78197c6d0f605b3b54149908705",
"text": "A novel design of solid phase microextraction fiber containing carbon nanotube reinforced sol-gel which was protected by polypropylene hollow fiber (HF-SPME) was developed for pre-concentration and determination of BTEX in environmental waste water and human hair samples. The method validation was included and satisfying results with high pre-concentration factors were obtained. In the present study orthogonal array experimental design (OAD) procedure with OA(16) (4(4)) matrix was applied to study the effect of four factors influencing the HF-SPME method efficiency: stirring speed, volume of adsorption organic solvent, extraction and desorption time of the sample solution, by which the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed for estimating the main significant factors and their percentage contributions in extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration ranges of 0.02-30,000ng/mL with correlation coefficients (r) 0.989-0.9991 for analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000ng/L), repeatability, low limits of detections (0.49-0.7ng/L) and excellent pre-concentration factors (185-1872). The best conditions which were estimated then applied for the analysis of BTEX compounds in the real samples.",
"title": ""
},
{
"docid": "9ba3c67136d573c4a10b133a2391d8bc",
"text": "Modern text collections often contain large documents that span several subject areas. Such documents are problematic for relevance feedback since inappropriate terms can easi 1y be chosen. This study explores the highly effective approach of feeding back passages of large documents. A less-expensive method that discards long documents is also reviewed and found to be effective if there are enough relevant documents. A hybrid approach that feeds back short documents and passages of long documents may be the best compromise.",
"title": ""
},
{
"docid": "3bb63838b4795c62b2c8e123daec2d7f",
"text": "To compare the quality of helical computed tomography (CT) images of the pelvis in patients with metal hip prostheses reconstructed using adaptive iterative dose reduction (AIDR) and AIDR with single-energy metal artifact reduction (SEMAR-A). This retrospective study included 28 patients (mean age, 64.6 ± 11.4 years; 6 men and 22 women). CT images were reconstructed using AIDR and SEMAR-A. Two radiologists evaluated the extent of metal artifacts and the depiction of structures in the pelvic region and looked for mass lesions. A radiologist placed a region of interest within the bladder and recorded CT attenuation. The metal artifacts were significantly reduced in SEMAR-A as compared to AIDR (p < 0.0001). The depictions of the bladder, ureter, prostate/uterus, rectum, and pelvic sidewall were significantly better with SEMAR-A than with AIDR (p < 0.02). All lesions were diagnosed with SEMAR-A, while some were not diagnosed with AIDR. The median and interquartile range (in parentheses) of CT attenuation within the bladder for AIDR were −34.0 (−46.6 to −15.0) Hounsfield units (HU) and were more variable than those seen for SEMAR-A [5.4 (−1.3 to 11.1)] HU (p = 0.033). In comparison with AIDR, SEMAR-A provided pelvic CT images of significantly better quality for patients with metal hip prostheses.",
"title": ""
},
{
"docid": "485f2cb9c34afe5fc19e2c4cc0a1ce54",
"text": "INTRODUCTION\nTo report our technique and experience in using a minimally invasive approach for aesthetic lateral canthoplasty.\n\n\nMETHODS\nRetrospective analysis of patients undergoing lateral canthoplasty through a minimally invasive, upper eyelid crease incision approach at Jules Stein Eye Institute by one surgeon (R.A.G.) between 2005 and 2008. Concomitant surgical procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were analyzed and graded for functional and cosmetic outcomes.\n\n\nRESULTS\nA total of 600 patients (1,050 eyelids) underwent successful lateral canthoplasty through a small incision in the upper eyelid crease to correct lower eyelid malposition (laxity, ectropion, entropion, retraction) and/or lateral canthal dystopia, encompassing 806 reconstructive and 244 cosmetic lateral canthoplasties. There were 260 males and 340 females, with mean age of 55 years old (range, 4-92 years old). Minimum follow-up time was 3 months (mean, 6 months; maximum, 6 years). Complications were rare and minor, including transient postoperative chemosis. Eighteen patients underwent reoperation in the following 2 years for recurrent lower eyelid malposition and/or lateral canthal deformity.\n\n\nCONCLUSIONS\nLateral canthoplasty through a minimally invasive upper eyelid crease incision and resuspension technique can effectively address lower eyelid laxity and/or dystopia, resulting in an aesthetic lateral canthus.",
"title": ""
},
{
"docid": "db2937f923ef0a58e993729a05e6fb91",
"text": "The visual attention (VA) span is defined as the amount of distinct visual elements which can be processed in parallel in a multi-element array. Both recent empirical data and theoretical accounts suggest that a VA span deficit might contribute to developmental dyslexia, independently of a phonological disorder. In this study, this hypothesis was assessed in two large samples of French and British dyslexic children whose performance was compared to that of chronological-age matched control children. Results of the French study show that the VA span capacities account for a substantial amount of unique variance in reading, as do phonological skills. The British study replicates this finding and further reveals that the contribution of the VA span to reading performance remains even after controlling IQ, verbal fluency, vocabulary and single letter identification skills, in addition to phoneme awareness. In both studies, most dyslexic children exhibit a selective phonological or VA span disorder. Overall, these findings support a multi-factorial view of developmental dyslexia. In many cases, developmental reading disorders do not seem to be due to phonological disorders. We propose that a VA span deficit is a likely alternative underlying cognitive deficit in dyslexia.",
"title": ""
},
{
"docid": "63e3be30835fd8f544adbff7f23e13ab",
"text": "Deaths due to plastic bag suffocation or plastic bag asphyxia are not reported in Malaysia. In the West many suicides by plastic bag asphyxia, particularly in the elderly and those who are chronically and terminally ill, have been reported. Accidental deaths too are not uncommon in the West, both among small children who play with shopping bags and adolescents who are solvent abusers. Another well-known but not so common form of accidental death from plastic bag asphyxia is sexual asphyxia, which is mostly seen among adult males. Homicide by plastic bag asphyxia too is reported in the West and the victims are invariably infants or adults who are frail or terminally ill and who cannot struggle. Two deaths due to plastic bag asphyxia are presented. Both the autopsies were performed at the University Hospital Mortuary, Kuala Lumpur. Both victims were 50-year old married Chinese males. One death was diagnosed as suicide and the other as sexual asphyxia. Sexual asphyxia is generally believed to be a problem associated exclusively with the West. Specific autopsy findings are often absent in deaths due to plastic bag asphyxia and therefore such deaths could be missed when some interested parties have altered the scene and most importantly have removed the plastic bag. A visit to the scene of death is invariably useful.",
"title": ""
},
{
"docid": "748d71e6832288cd0120400d6069bf50",
"text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull",
"title": ""
},
{
"docid": "c55de58c07352373570ec7d46c5df03d",
"text": "Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.",
"title": ""
},
{
"docid": "53a49412d75190357df5d159b11843f0",
"text": "Perception and reasoning are basic human abilities that are seamlessly connected as part of human intelligence. However, in current machine learning systems, the perception and reasoning modules are incompatible. Tasks requiring joint perception and reasoning ability are difficult to accomplish autonomously and still demand human intervention. Inspired by the way language experts decoded Mayan scripts by joining two abilities in an abductive manner, this paper proposes the abductive learning framework. The framework learns perception and reasoning simultaneously with the help of a trial-and-error abductive process. We present the Neural-Logical Machine as an implementation of this novel learning framework. We demonstrate thatusing human-like abductive learningthe machine learns from a small set of simple hand-written equations and then generalizes well to complex equations, a feat that is beyond the capability of state-of-the-art neural network models. The abductive learning framework explores a new direction for approaching human-level learning ability.",
"title": ""
},
{
"docid": "e5f2101e7937c61a4d6b11d4525a7ed8",
"text": "This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.",
"title": ""
},
{
"docid": "c408992e89867e583b8232b18f37edf0",
"text": "Fusion of information gathered from multiple sources is essential to build a comprehensive situation picture for autonomous ground vehicles. In this paper, an approach which performs scene parsing and data fusion for a 3D-LIDAR scanner (Velodyne HDL-64E) and a video camera is described. First of all, a geometry segmentation algorithm is proposed for detection of obstacles and ground areas from data collected by the Velodyne scanner. Then, corresponding image collected by the video camera is classified patch by patch into more detailed categories. After that, parsing result of each frame is obtained by fusing result of Velodyne data and that of image using the fuzzy logic inference framework. Finally, parsing results of consecutive frames are smoothed by the Markov random field based temporal fusion method. The proposed approach has been evaluated with datasets collected by our autonomous ground vehicle testbed in both rural and urban areas. The fused results are more reliable than that acquired via analysis of only images or Velodyne data. 2013 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
c80ddb45bcff0f43fdb0b6a7e4659462
|
Extinction-Based Shading and Illumination in GPU Volume Ray-Casting
|
[
{
"docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37",
"text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.",
"title": ""
}
] |
[
{
"docid": "b825426604420620e1bba43c0f45115e",
"text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.",
"title": ""
},
{
"docid": "6e2efc26a47be54ff2bffd1c01e54ca5",
"text": "In recent years, cyber attacks have caused substantial financial losses and been able to stop fundamental public services. Among the serious attacks, Advanced Persistent Threat (APT) has emerged as a big challenge to the cyber security hitting selected companies and organisations. The main objectives of APT are data exfiltration and intelligence appropriation. As part of the APT life cycle, an attacker creates a Point of Entry (PoE) to the target network. This is usually achieved by installing malware on the targeted machine to leave a back-door open for future access. A common technique employed to breach into the network, which involves the use of social engineering, is the spear phishing email. These phishing emails may contain disguised executable files. This paper presents the disguised executable file detection (DeFD) module, which aims at detecting disguised exe files transferred over the network connections. The detection is based on a comparison between the MIME type of the transferred file and the file name extension. This module was experimentally evaluated and the results show a successful detection of disguised executable files.",
"title": ""
},
{
"docid": "158de7fe10f35a78e4b62d2bc46d9b0d",
"text": "The Internet of Things promises ubiquitous connectivity of everything everywhere, which represents the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks; far beyond the number of devices in current wireless networks. Machine-to-machine communications aims to provide the communication infrastructure for enabling IoT by facilitating the billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure without, or with little, human intervention. Providing this infrastructure will require a dramatic shift from the current protocols mostly designed for human-to-human applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates the random access strategies for M2M communications, which shows that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. A massive non-orthogonal multiple access technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, where we also identify its practical challenges and future research directions.",
"title": ""
},
{
"docid": "e10886264acb1698b36c4d04cf2d9df6",
"text": "† This work was supported by the RGC CERG project PolyU 5065/98E and the Departmental Grant H-ZJ84 ‡ Corresponding author ABSTRACT Pattern discovery from time series is of fundamental importance. Particularly when the domain expert derived patterns do not exist or are not complete, an algorithm to discover specific patterns or shapes automatically from the time series data is necessary. Such an algorithm is noteworthy in that it does not assume prior knowledge of the number of interesting structures, nor does it require an exhaustive explanation of the patterns being described. In this paper, a clustering approach is proposed for pattern discovery from time series. In view of its popularity and superior clustering performance, the self-organizing map (SOM) was adopted for pattern discovery in temporal data sequences. It is a special type of clustering algorithm that imposes a topological structure on the data. To prepare for the SOM algorithm, data sequences are segmented from the numerical time series using a continuous sliding window. Similar temporal patterns are then grouped together using SOM into clusters, which may subsequently be used to represent different structures of the data or temporal patterns. Attempts have been made to tackle the problem of representing patterns in a multi-resolution manner. With the increase in the number of data points in the patterns (the length of patterns), the time needed for the discovery process increases exponentially. To address this problem, we propose to compress the input patterns by a perceptually important point (PIP) identification algorithm. The idea is to replace the original data segment by its PIP’s so that the dimensionality of the input pattern can be reduced. Encouraging results are observed and reported for the application of the proposed methods to the time series collected from the Hong Kong stock market.",
"title": ""
},
{
"docid": "cc6161fd350ac32537dc704cbfef2155",
"text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.",
"title": ""
},
{
"docid": "857e9430ebc5cf6aad2737a0ce10941e",
"text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.",
"title": ""
},
{
"docid": "adf530152b474c2b6147da07acf3d70d",
"text": "One of the basic services in a distributed network is clock synchronization as it enables a palette of services, such as synchronized measurements, coordinated actions, or time-based access to a shared communication medium. The IEEE 1588 standard defines the Precision Time Protocol (PTP) and provides a framework to synchronize multiple slave clocks to a master by means of synchronization event messages. While PTP is capable for synchronization accuracies below 1 ns, practical synchronization approaches are hitting a new barrier due to asymmetric line delays. Although compensation fields for the asymmetry are present in PTP version 2008, no specific measures to estimate the asymmetry are defined in the standard. In this paper we present a solution to estimate the line asymmetry in 100Base-TX networks based on line swapping. This approach seems appealing for existing installations as most Ethernet PHYs have the line swapping feature built in, and it only delays the network startup, but does not alter the operation of the network. We show by an FPGA-based prototype system that our approach is able to improve the synchronization offset from more than 10 ns down to below 200 ps.",
"title": ""
},
{
"docid": "b39904ccd087e59794cf2cc02e5d2644",
"text": "In this paper, we propose a novel walking method for torque controlled robots. The method is able to produce a wide range of speeds without requiring off-line optimizations and re-tuning of parameters. We use a quadratic whole-body optimization method running online which generates joint torques, given desired Cartesian accelerations of center of mass and feet. Using a dynamics model of the robot inside this optimizer, we ensure both compliance and tracking, required for fast locomotion. We have designed a foot-step planner that uses a linear inverted pendulum as simplified robot internal model. This planner is formulated as a quadratic convex problem which optimizes future steps of the robot. Fast libraries help us performing these calculations online. With very few parameters to tune and no perception, our method shows notable robustness against strong external pushes, relatively large terrain variations, internal noises, model errors and also delayed communication.",
"title": ""
},
{
"docid": "6d84b1ef838301a4c0f9136dffb1082f",
"text": "Power analysis is critical in research designs. This study discusses a simulation-based approach utilizing the likelihood ratio test to estimate the power of growth curve analysis. The power estimation is implemented through a set of SAS macros. The application of the SAS macros is demonstrated through several examples, including missing data and nonlinear growth trajectory situations. The results of the examples indicate that the power of growth curve analysis increases with the increase of sample sizes, effect sizes, and numbers of measurement occasions. In addition, missing data can reduce power. The SAS macros can be modified to accommodate more complex power analysis for both linear and nonlinear growth curve models.",
"title": ""
},
{
"docid": "e5d474fc8c0d2c97cc798eda4f9c52dd",
"text": "Gesture typing is an efficient input method for phones and tablets using continuous traces created by a pointed object (e.g., finger or stylus). Translating such continuous gestures into textual input is a challenging task as gesture inputs exhibit many features found in speech and handwriting such as high variability, co-articulation and elision. In this work, we address these challenges with a hybrid approach, combining a variant of recurrent networks, namely Long Short Term Memories [1] with conventional Finite State Transducer decoding [2]. Results using our approach show considerable improvement relative to a baseline shape-matching-based system, amounting to 4% and 22% absolute improvement respectively for small and large lexicon decoding on real datasets and 2% on a synthetic large scale dataset.",
"title": ""
},
{
"docid": "ae527d90981c371c4807799802dbc5a8",
"text": "We present our efforts to deploy mobile robots in office environments, focusing in particular on the challenge of planning a schedule for a robot to accomplish user-requested actions. We concretely aim to make our CoBot mobile robots available to execute navigational tasks requested by users, such as telepresence, and picking up and delivering messages or objects at different locations. We contribute an efficient web-based approach in which users can request and schedule the execution of specific tasks. The scheduling problem is converted to a mixed integer programming problem. The robot executes the scheduled tasks using a synthetic speech and touch-screen interface to interact with users, while allowing users to follow the task execution online. Our robot uses a robust Kinect-based safe navigation algorithm, moves fully autonomously without the need to be chaperoned by anyone, and is robust to the presence of moving humans, as well as non-trivial obstacles, such as legged chairs and tables. Our robots have already performed 15km of autonomous service tasks. Introduction and Related Work We envision a system in which autonomous mobile robots robustly perform service tasks in indoor environments. The robots perform tasks which are requested by building residents over the web, such as delivering mail, fetching coffee, or guiding visitors. To fulfill the users’ requests, we must plan a schedule of when the robot will execute each task in accordance with the constraints specified by the users. Many efforts have used the web to access robots, including the early examples of the teleoperation of a robotic arm (Goldberg et al. 1995; Taylor and Trevelyan 1995) and interfacing with a mobile robot (e.g, (Simmons et al. 1997; Siegwart and Saucy 1999; Saucy and Mondada 2000; Schulz et al. 2000)), among others. The robot Xavier (Simmons et al. 1997; 2000) allowed users to make requests over the web for the robot to go to specific places, and other mobile robots soon followed (Siegwart and Saucy 1999; Grange, Fong, and Baur 2000; Saucy and Mondada 2000; Schulz et al. 2000). The RoboCup@Home initiative (Visser and Burkhard 2007) provides competition setups for indoor Copyright © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: CoBot-2, an omnidirectional mobile robot for indoor users. service autonomous robots, with an increasingly wide scope of challenges focusing on robot autonomy and verbal interaction with users. In this work, we present our architecture to effectively make a fully autonomous indoor service robot available to general users. We focus on the problem of planning a schedule for the robot, and present a mixed integer linear programming solution for planning a schedule. We ground our work on the CoBot-2 platform1, shown in Figure 1. CoBot-2 autonomously localizes and navigates in a multi-floor office environment while effectively avoiding obstacles (Biswas and Veloso 2010). The robot carries a variety of sensing and computing devices, including a camera, a Kinect depthcamera, a Hokuyo LIDAR, a touch-screen tablet, microphones, speakers, and wireless communication. CoBot-2 executes tasks sent by users over the web, and we have devised a user-friendly web interface that allows users to specify tasks. 
Currently, the robot executes three types of tasks, including a GoToRoom task where the robot visits a location and a Telepresence task where the robot goes to a location.",
"title": ""
},
{
"docid": "dfc5f6899ceeb886b4197f3b70b7f6e7",
"text": "In cognitive radio networks, the secondary users can use the frequency bands when the primary users are not present. Hence secondary users need to constantly sense the presence of the primary users. When the primary users are detected, the secondary users have to vacate that channel. This makes the probability of detection important to the primary users as it indicates their protection level from secondary users. When the secondary users detect the presence of a primary user which is in fact not there, it is referred to as false alarm. The probability of false alarm is important to the secondary users as it determines their usage of an unoccupied channel. Depending on whose interest is of priority, either a targeted probability of detection or false alarm shall be set. After setting one of the probabilities, the other can be optimized through cooperative sensing. In this paper, we show that cooperating all secondary users in the network does not necessary achieve the optimum performance, but instead, it is achieved by cooperating a certain number of users with the highest primary user's signal to noise ratio. Computer simulations have shown that the Pd can increase from 92.03% to 99.88% and Pf can decrease from 6.02% to 0.06% in a network with 200 users.",
"title": ""
},
{
"docid": "29aa73eec85fd015a3a5f4679209c2d4",
"text": "We present a broadband waveguide ortho-mode transducer for the WR10 band that was designed for CLOVER, an astrophysics experiment aiming to characterize the polarization of the cosmic microwave background radiation. The design, based on a turnstile junction, was manufactured and then tested using a millimeter-wave vector network analyzer. The average measured return loss and isolation were -22 dB and -45 dB, respectively, across the entire WR10 band",
"title": ""
},
{
"docid": "d3562d7a7dafeb4971563d90e4c31fd6",
"text": "A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations. We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. We evaluate the performance of our algorithm on multiple semantic relations expressed using “verb”, “noun”, and “verb prep” lexico-syntactic patterns. Humanbased evaluation shows that the accuracy of the harvested information is about 90%. We also compare our results with existing knowledge base to outline the similarities and differences of the granularity and diversity of the harvested knowledge.",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "7a1aa7db367a45ff48fb31f1c04b7fef",
"text": "As the size of software systems increases, the algorithms and data structures of the computation no longer constitute the major design problems. When systems are constructed from many components, the organization of the overall system—the software architecture—presents a new set of design problems. This level of design has been addressed in a number of ways including informal diagrams and descriptive terms, module interconnection languages, templates and frameworks for systems that serve the needs of specific domains, and formal models of component integration mechanisms. In this paper we provide an introduction to the emerging field of software architecture. We begin by considering a number of common architectural styles upon which many systems are currently based and show how different styles can be combined in a single design. Then we present six case studies to illustrate how architectural representations can improve our understanding of complex software systems. Finally, we survey some of the outstanding problems in the field, and consider a few of the promising research directions.",
"title": ""
},
{
"docid": "587f58f291732bfb8954e34564ba76fd",
"text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.",
"title": ""
},
{
"docid": "8f79bd3f51ec54a3e86553514881088c",
"text": "A time series is a sequence of observations collected over fixed sampling intervals. Several real-world dynamic processes can be modeled as a time series, such as stock price movements, exchange rates, temperatures, among others. As a special kind of data stream, a time series may present concept drift, which affects negatively time series analysis and forecasting. Explicit drift detection methods based on monitoring the time series features may provide a better understanding of how concepts evolve over time than methods based on monitoring the forecasting error of a base predictor. In this paper, we propose an online explicit drift detection method that identifies concept drifts in time series by monitoring time series features, called Feature Extraction for Explicit Concept Drift Detection (FEDD). Computational experiments showed that FEDD performed better than error-based approaches in several linear and nonlinear artificial time series with abrupt and gradual concept drifts.",
"title": ""
},
{
"docid": "42584c93c05f512bc2f0bc8d73e90cc8",
"text": "This sketch describes a new, flexible, natural, intuitive, volumetric modeling and animation technique that combines implicit functions with turbulence-based procedural techniques. A cloud is modeled to demonstrate its advantages.",
"title": ""
},
{
"docid": "c7a985966fb6a04a712c67bf2580af61",
"text": "There is much knowledge about Business models (BM) (Zott 2009, Zott 2010, Zott 2011, Fielt 2011, Teece 2010, Lindgren 2013) but very little knowledge and research about Business Model Eco system (BMES) – those “ecosystems” where the BM’s really operates and works as value-adding mechanism – objects or “species”. How are these BMES actually constructed – How do they function – what are their characteristics and How can we really define these BMES? There are until now not an accepted language developed for BMES’s nor is the term BMES generally accepted in the BM Literature. This paper intends to commence the journey of building up such language on behalf of case studies within the Wind Mill, Health-, Agriculture-, and Fair line of BMES. A preliminary study of “AS IS” and “TO BE” BM’s related to these BMES present our first findings and preliminary understanding of BMES. The paper attempt to define what is a BMES and the dimensions and components of BMES. In this context we build upon a comprehensive review of academic business and BM literature together with an analogy study to ecological eco systems and ecosystem frameworks. We commence exploring the origin of the term business, BM and ecosystems and then relate this to a proposed BMES framework and the concept of the Multi BM framework (Lindgren 2013).",
"title": ""
}
] |
scidocsrr
|
0129aaa59ba73744e694cf807a9e7e1d
|
Inferring Taxi Status Using GPS Trajectories
|
[
{
"docid": "3bc48489d80e824efb7e3512eafc6f30",
"text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.",
"title": ""
},
{
"docid": "10858cdad9f821a88c3e2e56642b239c",
"text": "The clustering algorithm DBSCAN relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape as well as to distinguish noise. In this paper, we generalize this algorithm in two important directions. The generalized algorithm—called GDBSCAN—can cluster point objects as well as spatially extended objects according to both, their spatial and their nonspatial attributes. In addition, four applications using 2D points (astronomy), 3D points (biology), 5D points (earth science) and 2D polygons (geography) are presented, demonstrating the applicability of GDBSCAN to real-world problems.",
"title": ""
},
{
"docid": "40f21a8702b9a0319410b716bda0a11e",
"text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.",
"title": ""
},
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
}
] |
[
{
"docid": "3560950e32bb5cc2a2c80a53ad7e0617",
"text": "Automatically solving mathematical word problems (MWPs) is challenging, primarily due to the semantic gap between human-readable words and machine-understandable logics. Despite a long history dated back to the 1960s, MWPs has regained intensive attention in the past few years with the advancement of Artificial Intelligence (AI). To solve MWPs successfully is considered as a milestone towards general AI. Many systems have claimed promising results in self-crafted and small-scale datasets. However, when applied on large and diverse datasets, none of the proposed methods in the literatures achieves a high precision, revealing that current MWPs solvers are still far from intelligent. This motivated us to present a comprehensive survey to deliver a clear and complete picture of automatic math problem solvers. In this survey, we emphasize on algebraic word problems, summarize their extracted features and proposed techniques to bridge the semantic gap, and compare their performance in the publicly accessible datasets. We will also cover automatic solvers for other types of math problems such as geometric problems that require the understanding of diagrams. Finally, we will identify several emerging research directions for the readers with interests in MWPs.",
"title": ""
},
{
"docid": "169ea06b2ec47b77d01fe9a4d4f8a265",
"text": "One of the main challenges in security today is defending against malware attacks. As trends and anecdotal evidence show, preventing these attacks, regardless of their indiscriminate or targeted nature, has proven difficult: intrusions happen and devices get compromised, even at security-conscious organizations. As a consequence, an alternative line of work has focused on detecting and disrupting the individual steps that follow an initial compromise and are essential for the successful progression of the attack. In particular, several approaches and techniques have been proposed to identify the command and control (C8C) channel that a compromised system establishes to communicate with its controller.\n A major oversight of many of these detection techniques is the design’s resilience to evasion attempts by the well-motivated attacker. C8C detection techniques make widespread use of a machine learning (ML) component. Therefore, to analyze the evasion resilience of these detection techniques, we first systematize works in the field of C8C detection and then, using existing models from the literature, go on to systematize attacks against the ML components used in these approaches.",
"title": ""
},
{
"docid": "9b1874fb7e440ad806aa1da03f9feceb",
"text": "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called Deep Adaptation Modules (DAM) that constrains newly learned filters to be linear combinations of existing ones. DAMs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.",
"title": ""
},
{
"docid": "6a5b587073c46cc584fc01c4f3519fab",
"text": "Baggage inspection using X-ray screening is a priority task that reduces the risk of crime and terrorist attacks. Manual detection of threat items is tedious because very few bags actually contain threat items and the process requires a high degree of concentration. An automated solution would be a welcome development in this field. We propose a methodology for automatic detection of threat objects using single X-ray images. Our approach is an adaptation of a methodology originally created for recognizing objects in photographs based on implicit shape models. Our detection method uses a visual vocabulary and an occurrence structure generated from a training dataset that contains representative X-ray images of the threat object to be detected. Our method can be applied to single views of grayscale X-ray images obtained using a single energy acquisition system. We tested the effectiveness of our method for the detection of three different threat objects: 1) razor blades; 2) shuriken (ninja stars); and 3) handguns. The testing dataset for each threat object consisted of 200 X-ray images of bags. The true positive and false positive rates (TPR and FPR) are: (0.99 and 0.02) for razor blades, (0.97 and 0.06) for shuriken, and (0.89 and 0.18) for handguns. If other representative training datasets were utilized, we believe that our methodology could aid in the detection of other kinds of threat objects.",
"title": ""
},
{
"docid": "ae3b4397ebc759bbf20850f949bc7376",
"text": "Circulating tumor cell clusters (CTC clusters) are present in the blood of patients with cancer but their contribution to metastasis is not well defined. Using mouse models with tagged mammary tumors, we demonstrate that CTC clusters arise from oligoclonal tumor cell groupings and not from intravascular aggregation events. Although rare in the circulation compared with single CTCs, CTC clusters have 23- to 50-fold increased metastatic potential. In patients with breast cancer, single-cell resolution RNA sequencing of CTC clusters and single CTCs, matched within individual blood samples, identifies the cell junction component plakoglobin as highly differentially expressed. In mouse models, knockdown of plakoglobin abrogates CTC cluster formation and suppresses lung metastases. In breast cancer patients, both abundance of CTC clusters and high tumor plakoglobin levels denote adverse outcomes. Thus, CTC clusters are derived from multicellular groupings of primary tumor cells held together through plakoglobin-dependent intercellular adhesion, and though rare, they greatly contribute to the metastatic spread of cancer.",
"title": ""
},
{
"docid": "fe18b85af942d35b4e4ec1165e2e63c3",
"text": "The retrofitting of existing buildings to resist the seismic loads is very important to avoid losing lives or financial disasters. The aim at retrofitting processes is increasing total structure strength by increasing stiffness or ductility ratio. In addition, the response modification factors (R) have to satisfy the code requirements for suggested retrofitting types. In this study, two types of jackets are used, i.e. full reinforced concrete jackets and surrounding steel plate jackets. The study is carried out on an existing building in Madinah by performing static pushover analysis before and after retrofitting the columns. The selected model building represents nearly all-typical structure lacks structure built before 30 years ago in Madina City, KSA. The comparison of the results indicates a good enhancement of the structure respect to the applied seismic forces. Also, the response modification factor of the RC building is evaluated for the studied cases before and after retrofitting. The design of all vertical elements (columns) is given. The results show that the design of retrofitted columns satisfied the code's design stress requirements. However, for some retrofitting types, the ductility requirements represented by response modification factor do not satisfy KSA design code (SBC301). Keywords—Concrete jackets, steel jackets, RC buildings pushover analysis, non-linear analysis.",
"title": ""
},
{
"docid": "131a866cba7a8b2e4f66f2496a80cb41",
"text": "The Python language is highly dynamic, most notably due to late binding. As a consequence, programs using Python typically run an order of magnitude slower than their C counterpart. It is also a high level language whose semantic can be made more static without much change from a user point of view in the case of mathematical applications. In that case, the language provides several vectorization opportunities that are studied in this paper, and evaluated in the context of Pythran, an ahead-of-time compiler that turns Python module into C++ meta-programs.",
"title": ""
},
{
"docid": "0df681e77b30e9143f7563b847eca5c6",
"text": "BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm3, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot is able to securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detection and evaluation of identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.",
"title": ""
},
{
"docid": "780e49047bdacda9862c51338aa1397f",
"text": "We consider stochastic volatility models under parameter uncertainty and investigate how model derived prices of European options are affected. We let the pricing parameters evolve dynamically in time within a specified region, and formalise the problem as a control problem where the control acts on the parameters to maximise/minimise the option value. Through a dual representation with backward stochastic differential equations, we obtain explicit equations for Heston’s model and investigate several numerical solutions thereof. In an empirical study, we apply our results to market data from the S&P 500 index where the model is estimated to historical asset prices. We find that the conservative model-prices cover 98% of the considered market-prices for a set of European call options.",
"title": ""
},
{
"docid": "3b27f02b96f079e57714ef7c2f688b48",
"text": "Polycystic ovary syndrome (PCOS) affects 5-10% of women in reproductive age and is characterized by oligo/amenorrhea, androgen excess, insulin resistance, and typical polycystic ovarian morphology. It is the most common cause of infertility secondary to ovulatory dysfunction. The underlying etiology is still unknown but is believed to be multifactorial. Insulin-sensitizing compounds such as inositol, a B-complex vitamin, and its stereoisomers (myo-inositol and D-chiro-inositol) have been studied as an effective treatment of PCOS. Administration of inositol in PCOS has been shown to improve not only the metabolic and hormonal parameters but also ovarian function and the response to assisted-reproductive technology (ART). Accumulating evidence suggests that it is also capable of improving folliculogenesis and embryo quality and increasing the mature oocyte yield following ovarian stimulation for ART in women with PCOS. In the current review, we collate the evidence and summarize our current knowledge on ovarian stimulation and ART outcomes following inositol treatment in women with PCOS undergoing in vitro fertilization (IVF) and/or intracytoplasmic sperm injection (ICSI).",
"title": ""
},
{
"docid": "30e80cceb7e63f89c6ab0cd20988bedb",
"text": "This work is focused on the development of a new management system for building and home automation that provides a fully real time monitor of household appliances and home environmental parameters. The developed system consists of a smart sensing unit, wireless sensors and actuators and a Web-based interface for remote and mobile applications. The main advantages of the proposed solution rely on the reliability of the developed algorithmics, on modularity and open-system characteristics, on low power consumption and system cost efficiency.",
"title": ""
},
{
"docid": "7687f85746acf4e3cd24d512e5efd31e",
"text": "Thyroid eye disease is a multifactorial autoimmune disease with a spectrum of signs and symptoms. Oftentimes, the diagnosis of thyroid eye disease is straightforward, based upon history and physical examination. The purpose of this review is to assist the eye-care practitioner in staging the severity of thyroid eye disease (mild, moderate-to-severe and sight-threatening) and correlating available treatment modalities. Eye-care practitioners play an important role in the multidisciplinary team by assessing functional vision while also managing ocular health.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
},
{
"docid": "cb92003c6e6344fcb2c735d3e93801b9",
"text": "Recommendation systems play a vital role to keep users engaged with personalized content in modern online platforms. Deep learning has revolutionized many research fields and there is a recent surge of interest in applying it to collaborative filtering (CF). However, existing methods compose deep learning architectures with the latent factor model ignoring a major class of CF models, neighborhood or memory-based approaches. We propose Collaborative Memory Networks (CMN), a deep architecture to unify the two classes of CF models capitalizing on the strengths of the global structure of latent factor model and local neighborhood-based structure in a nonlinear fashion. Motivated by the success of Memory Networks, we fuse a memory component and neural attention mechanism as the neighborhood component. The associative addressing scheme with the user and item memories in the memory module encodes complex user-item relations coupled with the neural attention mechanism to learn a user-item specific neighborhood. Finally, the output module jointly exploits the neighborhood with the user and item memories to produce the ranking score. Stacking multiple memory modules together yield deeper architectures capturing increasingly complex user-item relations. Furthermore, we show strong connections between CMN components, memory networks and the three classes of CF models. Comprehensive experimental results demonstrate the effectiveness of CMN on three public datasets outperforming competitive baselines. Qualitative visualization of the attention weights provide insight into the model's recommendation process and suggest the presence of higher order interactions.",
"title": ""
},
{
"docid": "1403f10c08b41c9cb037e4c50fbb570f",
"text": "Semantic web is a paradigm that is proposed for configuring and controlling the overwhelming volumes of information on the web. One important challenge in semantic web is decreasing execution times of queries. Reordering triple patterns is an approach for decreasing execution times of queries. In this study, an ant colony optimization approach for optimizing SPARQL queries by reordering triple patterns is proposed. Contributions of this approach are optimizing order of triple patterns in SPARQL queries using ant colony optimization for lesser execution time and real time optimization without requiring any prior domain knowledge. This proposed novel method is implemented using ARQ query engine and it optimizes the queries for in-memory models of ontologies. Experiments show that proposed method reduces execution time considerably.",
"title": ""
},
{
"docid": "af0a1a8af70423ec09e0bb1e47f2e3f6",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof-of-principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants.",
"title": ""
},
{
"docid": "9af1423c296f59683b8e6528ad039d5c",
"text": "We present a novel approach to natural language generation (NLG) that applies hierarchical reinforcement learning to text generation in the wayfinding domain. Our approach aims to optimise the integration of NLG tasks that are inherently different in nature, such as decisions of content selection, text structure, user modelling, referring expression generation (REG), and surface realisation. It also aims to capture existing interdependencies between these areas. We apply hierarchical reinforcement learning to learn a generation policy that captures these interdependencies, and that can be transferred to other NLG tasks. Our experimental results—in a simulated environment—show that the learnt wayfinding policy outperforms a baseline policy that takes reasonable actions but without optimization.",
"title": ""
},
{
"docid": "461f422f7705f0b5ef8e8edde989719e",
"text": "In this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions. The deterministic policy gradient has a particularly appealing form: it is the expected gradient of the action-value function. This simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient. To ensure adequate exploration, we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy. We demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces.",
"title": ""
},
{
"docid": "cb29a1fc5a8b70b755e934c9b3512a36",
"text": "The problem of pedestrian detection in image and video frames has been extensively investigated in the past decade. However, the low performance in complex scenes shows that it remains an open problem. In this paper, we propose to cascade simple Aggregated Channel Features (ACF) and rich Deep Convolutional Neural Network (DCNN) features for efficient and effective pedestrian detection in complex scenes. The ACF based detector is used to generate candidate pedestrian windows and the rich DCNN features are used for fine classification. Experiments show that the proposed approach achieved leading performance in the INRIA dataset and comparable performance to the state-of-the-art in the Caltech and ETH datasets.",
"title": ""
},
{
"docid": "d4617c6819cddbd79b1fca62a71e90b0",
"text": "In this paper, a single-stage three-phase isolated ac-dc converter topology utilizing SiC MOSFETs is proposed for power rectification with a stepped-down output voltage. Unlike the conventional two-stage [front-end power factor correction (PFC) stage and isolated dc-dc stage] ac-dc converters, the full/half bridge structure in dc-dc stage is eliminated in this structure. The high-frequency pulsating voltage is obtained directly from the PFC stage and is applied across the high-frequency transformer, leading to a more compact design. In addition, there is an advantage of zero voltage switching (ZVS) in four PFC MOSFETs connected to the high-frequency tank, which is not achievable in the case of a conventional two-staged ac-dc converter. A sine-pulse width modulation (PWM)-based control scheme is applied with the common-mode duty ratio injection method to minimize the current harmonics without affecting the power factor. An LC filter is used after the PFC semistage to suppress the line-frequency voltage ripple. Furthermore, the intermediate dc-link capacitor value can be greatly reduced through no additional ripple constraints. Experimental and simulation results are included for a laboratory prototype, which converts 115-V, 400-Hz three-phase input voltage to 28-V dc output voltage. The experimental results demonstrate a power factor of 0.993 with a conversion efficiency of 95.4%, and total harmonic distortion (THD) as low as 3.5% at 2.1-kW load condition.",
"title": ""
}
] |
scidocsrr
|
8c816d54c05be71aafe2578ef9a36e3c
|
Deep Captioning with Attention-Based Visual Concept Transfer Mechanism for Enriching Description
|
[
{
"docid": "e2d8da3d28f560c4199991dbdffb8c2c",
"text": "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as the and of. Other words that may seem visual can often be predicted reliably just from the language model e.g., sign after behind a red stop or phone following talking on a cell. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"title": ""
},
{
"docid": "527d7c091cfc63c8e9d36afdd6b7bdfe",
"text": "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"title": ""
}
] |
[
{
"docid": "691da5852aad20ace40be20bfeae3ea7",
"text": "Experimental manipulations of affect induced by a brief newspaper report of a tragic event produced a pervasive increase in subjects' estimates of the frequency of many risks and other undesirable events. Contrary to expectation, the effect was independent of the similarity between the report arid the estimated risk. An account of a fatal stabbing did not increase the frequency estimate of a closely related risk, homicide, more than the estimates of unrelated risks such as natural hazards. An account of a happy event that created positive affect produced a comparable global decrease in judged frequency of risks.",
"title": ""
},
{
"docid": "39bfd705fb71e9ba4a503246408c6820",
"text": "We develop a theoretical model to describe and explain variation in corporate governance among advanced capitalist economies, identifying the social relations and institutional arrangements that shape who controls corporations, what interests corporations serve, and the allocation of rights and responsibilities among corporate stakeholders. Our “actor-centered” institutional approach explains firm-level corporate governance practices in terms of institutional factors that shape how actors’ interests are defined (“socially constructed”) and represented. Our model has strong implications for studying issues of international convergence.",
"title": ""
},
{
"docid": "97e2b9a4c2dcd7b7180c06490816101f",
"text": "Natural images exhibit geometric structures that are informative of the properties of the underlying scene. Modern image processing algorithms respect such characteristics by employing regularizers that capture the statistics of natural images. For instance, total variation (TV) respects the highly kurtotic distribution of the pointwise gradient by allowing for large magnitude outlayers. However, the gradient magnitude alone does not capture the directionality and scale of local structures in natural images. The structure tensor provides a more meaningful description of gradient information as it describes both the size and orientation of the image gradients in a neighborhood of each point. Based on this observation, we propose a variational model for image reconstruction that employs a regularization functional adapted to the local geometry of image by means of its structure tensor. Our method alternates two minimization steps: 1) robust estimation of the structure tensor as a semidefinite program and 2) reconstruction of the image with an adaptive regularizer defined from this tensor. This two-step procedure allows us to extend anisotropic diffusion into the convex setting and develop robust, efficient, and easy-to-code algorithms for image denoising, deblurring, and compressed sensing. Our method extends naturally to nonlocal regularization, where it exploits the local self-similarity of natural images to improve nonlocal TV and diffusion operators. Our experiments show a consistent accuracy improvement over classic regularization.",
"title": ""
},
{
"docid": "453e4343653f2d84bc4b5077d9556de1",
"text": "Device-to-Device (D2D) communication is the technology enabling user equipments (UEs) to directly communicate with each other without help of evolved nodeB (eNB). Due to this characteristic, D2D communication can reduce end-to-end delay and traffic load offered to eNB. However, by applying D2D communication into cellular systems, interference between D2D and eNB relaying UEs can occur if D2D UEs reuse frequency band for eNB relaying UEs. In cellular systems, fractional frequency reuse (FFR) is used to reduce inter-cell interference of cell outer UEs. In this paper, we propose a radio resource allocation scheme for D2D communication underlaying cellular networks using FFR. In the proposed scheme, D2D and cellular UEs use the different frequency bands chosen as users' locations. The proposed radio resource allocation scheme can alleviate interference between D2D and cellular UEs if D2D device is located in cell inner region. If D2D UEs is located in cell outer region, D2D and cellular UEs experience tolerable interference. By simulations, we show that the proposed scheme improves the performance of D2D and cellular UEs by reducing interference between them.",
"title": ""
},
{
"docid": "4c2756db19aec0a7bf59d1a054da59c2",
"text": "A modular real-time diminished reality pipeline for indoor applications is presented. The pipeline includes a novel inpainting method which requires no prior information of the textures behind the object to be diminished. The inpainting method operates on rectified images and adapts to scene illumination. In typically challenging illumination situations, the method produces more realistic results in indoor scenes than previous approaches. Modularity enables using alternative implementations in different stages and adapting the pipeline for different applications. Finally, practical solutions to problems occurring in diminished reality applications, for example interior design, are discussed.",
"title": ""
},
{
"docid": "c9d833d872ab0550edb0aa26565ac76b",
"text": "In this paper we investigate the potential of the neural machine translation (NMT) when taking into consideration the linguistic aspect of target language. From this standpoint, the NMT approach with attention mechanism [1] is extended in order to produce several linguistically derived outputs. We train our model to simultaneously output the lemma and its corresponding factors (e.g. part-of-speech, gender, number). The word level translation is built with a mapping function using a priori linguistic information. Compared to the standard NMT system, factored architecture increases significantly the vocabulary coverage while decreasing the number of unknown words. With its richer architecture, the Factored NMT approach allows us to implement several training setup that will be discussed in detail along this paper. On the IWSLT’15 English-to-French task, FNMT model outperforms NMT model in terms of BLEU score. A qualitative analysis of the output on a set of test sentences shows the effectiveness of the FNMT model.",
"title": ""
},
{
"docid": "7f75e0b789e7b2bbaa47c7fa06efb852",
"text": "A significant increase in the capability for controlling motion dynamics in key frame animation is achieved through skeleton control. This technique allows an animator to develop a complex motion sequence by animating a stick figure representation of an image. This control sequence is then used to drive an image sequence through the same movement. The simplicity of the stick figure image encourages a high level of interaction during the design stage. Its compatibility with the basic key frame animation technique permits skeleton control to be applied selectively to only those components of a composite image sequence that require enhancement.",
"title": ""
},
{
"docid": "fbb48416c34d4faee1a87ac2efaf466d",
"text": "Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell et al., 2018), a strong, linguisticallyinformed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.",
"title": ""
},
{
"docid": "f162fdc93b4cea2cf1ffa6042b1a2b54",
"text": "Our research examined the effects of handsfree cell-phone conversations on simulated driving. We found that even when participants looked directly at objects in the driving environment, they were less likely to create a durable memory of those objects if they were conversing on a cell phone. This pattern was obtained for objects of both high and low relevance, suggesting that very little semantic analysis of the objects occurs outside the restricted focus of attention. Moreover, in-vehicle conversations do not interfere with driving as much as cell-phone conversations do, because drivers are better able to synchronize the processing demands of driving with invehicle conversations than with cell-phone conversations. Together, these data support an inattention-blindness interpretation wherein the disruptive effects of cell-phone conversations on driving are due in large part to the diversion of attention from driving to the phone conversation. KEYWORDS—driver distraction; inattention blindness; attention; cell phones This article focuses on a dual-task activity that over 100 million drivers in the United States currently engage in: the concurrent use of a cell phone while operating a motor vehicle. It is now well established that cell-phone use significantly impairs driving performance (e.g., McEvoy et al., 2005; Redelmeier & Tibshirani, 1997; Strayer, Drews, & Johnston, 2003; Strayer & Johnston, 2001). For example, our earlier research found that cell-phone conversations made drivers more likely to miss traffic signals and react more slowly to the signals that they did detect (Strayer & Johnston, 2001). Moreover, equivalent deficits in driving performance were obtained for users of both hand-held and hands-free cell phones (see also Strayer, Drews, & Crouch, 2006). By contrast, listening to radio broadcasts or books on tape did not impair driving. These findings are important because they demonstrate that listening to verbal material, by itself, is not sufficient to produce the dual-task interference associated with using a cell phone while driving. The data indicate that when a driver becomes involved in a cell-phone conversation, attention is withdrawn from the processing of the information in the driving environment necessary for safe operation of the motor vehicle. EVIDENCE OF INATTENTION BLINDNESS The objective of this article is to muster evidence in support of the hypothesis that cell-phone conversations impair driving by inducing a form of inattention blindness in which drivers fail to see objects in their driving environment when they are talking on a cell phone. Our first study examined how cell-phone conversations affect drivers’ attention to objects they encounter while driving. We contrasted performance when participants were driving but not conversing (i.e., single-task conditions) with that when participants were driving and conversing on a hands-free cell phone (i.e., dual-task conditions). We used an incidentalrecognition-memory paradigm to assess what information in the driving scene participants attended to while driving. The procedure required participants to perform a simulated driving task without the foreknowledge that their memory for objects in the driving scene would be subsequently tested. Later, participants were given a surprise recognition-memory test in which they were shown objects that had been presented while they were driving and were asked to discriminate these objects from foils that had not been in the driving scene. 
Differences in incidental recognition memory between single- and dual-task conditions provide an estimate of the degree to which attention to visual information in the driving environment is distracted by cell-phone conversations. Each of the four studies we report here used a computerized driving simulator (made by I-SIM; shown in Fig. 1) with high-resolution displays providing a 180-degree field of view. (The dashboard instrumentation, steering wheel, gas, and brake pedal are from a Ford Crown Victoria sedan with an automatic transmission.) The simulator incorporates vehicle-dynamics, traffic-scenario, and road-surface software to provide realistic scenes and traffic conditions. We monitored the eye fixations of participants using a video-based eye-tracker (Applied Science Laboratories Model 501) that allows a free range of head and eye movements, thereby affording naturalistic viewing conditions for participants as they negotiated the driving environment. The dual-task conditions in our studies involved naturalistic conversations with a confederate on a cell phone. To avoid any possible interference from manual components of cell-phone use, participants used a hands-free cell phone that was positioned and adjusted before driving began (see Fig. 1). Additionally, the call was begun before participants began the dual-task scenarios. Thus, any dual-task interference that we observed had to be due to the cell-phone conversation itself, as there was no manual manipulation of the cell phone during the dual-task portions of the study. Our first study focused on the conditional probability of participants recognizing objects that they had fixated on while driving. This analysis specifically tested for memory of objects presented where a given driver’s eyes had been directed. The conditional probability analysis revealed that participants were more than twice as likely to recognize roadway signs encountered in the single-task condition than in the dual-task condition. That is, when we focused our analysis on objects in the driving scene on which participants had fixated, we found significant differences in recognition memory between single- and dual-task conditions. Moreover, our analysis found that even when participants’ eyes were directed at objects in the driving environment for the same duration, they were less likely to remember them if they were conversing on a cellular phone. The data are consistent with the inattention-blindness hypothesis: The cell-phone conversation disrupts performance by diverting attention from the external environment associated with the driving task to an engaging context associated with the cell-phone conversation. Our second study examined the extent to which drivers who engage in cell-phone conversations strategically reallocate attention from the processing of less-relevant information in the driving scene to the cell-phone conversation while continuing to give highest priority to the processing of task-relevant information in the driving scene. If such a reallocation policy were observed, it would suggest that drivers might be able to learn how to safely use cell phones while driving.
The procedure was similar to that of the first study except that we used a two-alternative forced-choice recognition-memory paradigm to determine what information in the driving scene participants attended to while driving. We placed 30 objects varying in relevance to safe driving (e.g., pedestrians, cars, trucks, signs, billboards, etc.) along the roadway in the driving scene; another 30 objects were not presented in the driving scene and served as foils in the recognition-memory task. There were different driving scenarios for different participants, and target objects for some participants were foil objects for others. Objects in the driving scene were positioned so that they were clearly in view as participants drove past them, and the target and foils were counterbalanced across participants. Here again, participants were not informed about the memory test until after they had completed the driving portions of the study. As in the first study, we computed the conditional probability of recognizing an object given that participants fixated on it while driving. Like the first study, this analysis specifically tested for memory of objects that were located where the driver’s eyes had been directed. We found that participants were more likely to recognize objects encountered in the single-task condition than in the dual-task condition and that this difference was not affected by how long they had fixated on the objects. Thus, when we ensured that participants looked at an object for the same amount of time, we found significant differences in recognition memory between single- and dual-task conditions. After each forced-choice judgment, participants were also asked to rate the objects in terms of their relevance to safe driving, using a 10-point scale (participants were initially given an example in which a child playing near the road might receive a rating of 9 or 10, whereas a sign documenting that a volunteer group cleans a particular section of the highway might receive a rating of 1). Participants’ safety-relevance ratings ranged from 1.5 to 8, with an average of 4.1. A series of regression analyses revealed that there was no association between recognition memory and traffic relevance. In fact, traffic relevance had absolutely no effect on the difference in recognition memory between single- and dual-task conditions, suggesting that the contribution of an object’s perceived relevance to recognition-memory performance is negligible. This analysis is important because it indicates that drivers do not strategically reallocate attention from the processing of less-relevant information in the driving scene to the cell-phone conversation while continuing to give highest priority to the processing of task-relevant information in the driving scene. Fig. 1. A participant talking on a hands-free cell phone while driving in the simulator.",
"title": ""
},
{
"docid": "3d3fa8c36232b4d29c348e800423fc7e",
"text": "In this paper I review what insights we have gained about economic and financial relationships from the use of wavelets and speculate on what further insights we may gain in the future. Wavelets are treated as a “lens” that enables the researcher to explore relationships that previously were",
"title": ""
},
{
"docid": "f4fbd925fb46f05c526b228993f5e326",
"text": "Obesity in the world has spread to epidemic proportions. In 2008 the World Health Organization (WHO) reported that 1.5 billion adults were suffering from some sort of overweightness. Obesity treatment requires constant monitoring and a rigorous control and diet to measure daily calorie intake. These controls are expensive for the health care system, and the patient regularly rejects the treatment because of the excessive control over the user. Recently, studies have suggested that the usage of technology such as smartphones may enhance the treatments of obesity and overweight patients; this will generate a degree of comfort for the patient, while the dietitian can count on a better option to record the food intake for the patient. In this paper we propose a smart system that takes advantage of the technologies available for the Smartphones, to build an application to measure and monitor the daily calorie intake for obese and overweight patients. Via a special technique, the system records a photo of the food before and after eating in order to estimate the consumption calorie of the selected food and its nutrient components. Our system presents a new instrument in food intake measuring which can be more useful and effective.",
"title": ""
},
{
"docid": "7e22005412f4e7e924103102cbcb7374",
"text": "Most of the clustering algorithms are based on Euclidean distance as measure of similarity between data objects. Theses algorithms also require initial setting of parameters as a prior, for example the number of clusters. The Euclidean distance is very sensitive to scales of variables involved and independent of correlated variables. To conquer these drawbacks a hybrid clustering algorithm based on Mahalanobis distance is proposed in this paper. The reason for the hybridization is to relieve the user from setting the parameters in advance. The experimental results of the proposed algorithm have been presented for both synthetic and real datasets. General Terms Data Mining, Clustering, Pattern Recognition, Algorithms.",
"title": ""
},
{
"docid": "519c5735d8debde98645a6d41f68d3af",
"text": "Although the writing of a thesis is a very important step for scientists undertaking a career in research, little information exists on the impact of theses as a source of scientific information. Knowing the impact of theses is relevant not only for students undertaking graduate studies, but also for the building of repositories of electronic theses and dissertations (ETD) and the substantial investment this involves. This paper shows that the impact of theses as information sources has been generally declining over the last century, apart from during the period of the ‘golden years’ of research, 1945 to 1975. There is no evidence of ETDs having a positive impact; on the contrary, since their introduction the impact of theses has actually declined more rapidly. This raises questions about the justification for ETDs and the appropriateness of writing monograph style theses as opposed to publication of a series of peer-reviewed papers as the requirement for fulfilment of graduate studies.",
"title": ""
},
{
"docid": "99880fca88bef760741f48166a51ca6f",
"text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.",
"title": ""
},
{
"docid": "60cac74e5feffb45f3b926ce2ec8b0b9",
"text": "Battery power is an important resource in ad hoc networks. It has been observed that in ad hoc networks, energy consumption does not reflect the communication activities in the network. Many existing energy conservation protocols based on electing a routing backbone for global connectivity are oblivious to traffic characteristics. In this paper, we propose an extensible on-demand power management framework for ad hoc networks that adapts to traffic load. Nodes maintain soft-state timers that determine power management transitions. By monitoring routing control messages and data transmission, these timers are set and refreshed on-demand. Nodes that are not involved in data delivery may go to sleep as supported by the MAC protocol. This soft state is aggregated across multiple flows and its maintenance requires no additional out-of-band messages. We implement a prototype of our framework in the ns-2 simulator that uses the IEEE 802.11 MAC protocol. Simulation studies using our scheme with the Dynamic Source Routing protocol show a reduction in energy consumption near 50% when compared to a network without power management under both long-lived CBR traffic and on-off traffic loads, with comparable throughput and latency. Preliminary results also show that it outperforms existing routing backbone election approaches.",
"title": ""
},
{
"docid": "2ccae5b48fc5ac10f948b79fc4fb6ff3",
"text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.",
"title": ""
},
{
"docid": "79fd1db13ce875945c7e11247eb139c8",
"text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.",
"title": ""
},
{
"docid": "866e60129032c4e41761b7b19483c74a",
"text": "The technology to immerse people in computer generated worlds was proposed by Sutherland in 1965, and realised in 1968 with a head-mounted display that could present a user with a stereoscopic 3-dimensional view slaved to a sensing device tracking the user's head movements (Sutherland 1965; 1968). The views presented at that time were simple wire frame models. The advance of computer graphics knowledge and technology, itself tied to the enormous increase in processing power and decrease in cost, together with the development of relatively efficient and unobtrusive sensing devices, has led to the emergence of participatory immersive virtual environments, commonly referred to as \"virtual reality\" (VR) (Fisher 1982; Fisher et. al. 1986; Teitel 1990; see also SIGGRAPH Panel Proceedings 1989,1990). Ellis defines virtualisation as \"the process by which a human viewer interprets a patterned sensory impression to be an extended object in an environment other than that in which it physically exists\" (Ellis, 1991). In this definition the idea is taken from geometric optics, where the concept of a \"virtual image\" is precisely defined, and is well understood. In the context of virtual reality the \"patterned sensory impressions\" are generated to the human senses through visual, auditory, tactile and kinesthetic displays, though systems that effectively present information in all such sensory modalities do not exist at present. Ellis further distinguishes between a virtual space, image and environment. An example of the first is a flat surface on which an image is rendered. Perspective depth cues, texture gradients, occlusion, and other similar aspects of the image lead to an observer perceiving",
"title": ""
},
{
"docid": "90945e286b10df6564248c5c3d4b9116",
"text": "OBJECTIVE\nThe purpose is to develop and evaluate the ability of the computer-aided diagnosis (CAD) methods that apply texture analysis and pattern classification to differentiate malignant and benign bone and soft-tissue lesions on 18F-fluorodeoxy-glucose positron emission tomography/computed tomography ((18)F-FDG PET/CT) images.\n\n\nMETHODS\nSubjects were 103 patients with 59 malignant and 44 benign bone and soft tissue lesions larger than 25 mm in diameter. Variable texture parameters of standardized uptake values (SUV) and CT Hounsfield unit values were three-dimensionally calculated in lesional volumes-of-interest segmented on PET/CT images. After selection of a subset of the most optimal texture parameters, a support vector machine classifier was used to automatically differentiate malignant and benign lesions. We developed three kinds of CAD method. Two of them utilized only texture parameters calculated on either CT or PET images, and the other one adopted the combined PET and CT texture parameters. Their abilities of differential diagnosis were compared with the SUV method with an optimal cut-off value of the maximum SUV.\n\n\nRESULTS\nThe CAD methods utilizing only optimal PET (or CT) texture parameters showed sensitivity of 83.05 % (81.35 %), specificity of 63.63 % (61.36 %), and accuracy of 74.76 % (72.82 %). Although the ability of differential diagnosis by PET or CT texture analysis alone was not significantly different from the SUV method whose sensitivity, specificity, and accuracy were 64.41, 61.36, and 63.11 % (the optimal cut-off SUVmax was 5.4 ± 0.9 in the 10-fold cross-validation test), the CAD method with the combined PET and CT optimal texture parameters (PET: entropy and coarseness, CT: entropy and correlation) exhibited significantly better performance compared with the SUV method (p = 0.0008), showing a sensitivity of 86.44 %, specificity of 77.27 %, and accuracy of 82.52 %.\n\n\nCONCLUSIONS\nThe present CAD method using texture analysis to analyze the distribution/heterogeneity of SUV and CT values for malignant and benign bone and soft-tissue lesions improved the differential diagnosis on (18)F-FDG PET/CT images.",
"title": ""
},
{
"docid": "6c58c147bef99a2408859bdfa63da3a7",
"text": "We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or -greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates nearoptimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.",
"title": ""
}
] |
scidocsrr
|
b355c99c3db8c7848945e7f65029433c
|
Stream Compilation for Real-Time Embedded Multicore Systems
|
[
{
"docid": "22d5dd06ca164aa0b012b0764d7c4440",
"text": "As multicore architectures enter the mainstream, there is a pressing demand for high-level programming models that can effectively map to them. Stream programming offers an attractive way to expose coarse-grained parallelism, as streaming applications (image, video, DSP, etc.) are naturally represented by independent filters that communicate over explicit data channels.In this paper, we demonstrate an end-to-end stream compiler that attains robust multicore performance in the face of varying application characteristics. As benchmarks exhibit different amounts of task, data, and pipeline parallelism, we exploit all types of parallelism in a unified manner in order to achieve this generality. Our compiler, which maps from the StreamIt language to the 16-core Raw architecture, attains a 11.2x mean speedup over a single-core baseline, and a 1.84x speedup over our previous work.",
"title": ""
}
] |
[
{
"docid": "bde516c748dcd4a9b16ec8228220fa90",
"text": "BACKGROUND\nFew studies on foreskin development and the practice of circumcision have been done in Chinese boys. This study aimed to determine the natural development process of foreskin in children.\n\n\nMETHODS\nA total of 10 421 boys aged 0 to 18 years were studied. The condition of foreskin was classified into type I (phimosis), type II (partial phimosis), type III (adhesion of prepuce), type IV (normal), and type V (circumcised). Other abnormalities of the genitalia were also determined.\n\n\nRESULTS\nThe incidence of a completely retractile foreskin increased from 0% at birth to 42.26% in adolescence; however, the phimosis rate decreased with age from 99.7% to 6.81%. Other abnormalities included web penis, concealed penis, cryptorchidism, hydrocele, micropenis, inguinal hernia, and hypospadias.\n\n\nCONCLUSIONS\nIncomplete separation of foreskin is common in children. Since it is a natural phenomenon to approach the adult condition until puberty, circumcision should be performed with cautions in children.",
"title": ""
},
{
"docid": "edacac86802497e0e43c4a03bfd3b925",
"text": "This paper presents a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm, which provides accurate and robust localization within the globally consistent map in real time on a standard CPU. This is achieved by firstly performing the visual-inertial extended kalman filter(EKF) to provide motion estimate at a high rate. However the filter becomes inconsistent due to the well known linearization issues. So we perform a keyframe-based visual-inertial bundle adjustment to improve the consistency and accuracy of the system. In addition, a loop closure detection and correction module is also added to eliminate the accumulated drift when revisiting an area. Finally, the optimized motion estimates and map are fed back to the EKF-based visual-inertial odometry module, thus the inconsistency and estimation error of the EKF estimator are reduced. In this way, the system can continuously provide reliable motion estimates for the long-term operation. The performance of the algorithm is validated on public datasets and real-world experiments, which proves the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "a1f838270925e4769e15edfb37b281fd",
"text": "Assess extensor carpi ulnaris (ECU) tendon position in the ulnar groove, determine the frequency of tendon “dislocation” with the forearm prone, neutral, and supine, and determine if an association exists between ulnar groove morphology and tendon position in asymptomatic volunteers. Axial proton density-weighted MR was performed through the distal radioulnar joint with the forearm prone, neutral, and supine in 38 asymptomatic wrists. The percentage of the tendon located beyond the ulnar-most border of the ulnar groove was recorded. Ulnar groove depth and length was measured and ECU tendon signal was assessed. 15.8 % of tendons remained within the groove in all forearm positions. In 76.3 %, the tendon translated medially from prone to supine. The tendon “dislocated” in 0, 10.5, and 39.5 % with the forearm prone, neutral and supine, respectively. In 7.9 % prone, 5.3 % neutral, and 10.5 % supine exams, the tendon was 51–99 % beyond the ulnar border of the ulnar groove. Mean ulnar groove depth and length were 1.6 and 7.7 mm, respectively, with an overall trend towards greater degrees of tendon translation in shorter, shallower ulnar grooves. The ECU tendon shifts in a medial direction when the forearm is supine; however, tendon “dislocation” has not been previously documented in asymptomatic volunteers. The ECU tendon medially translated or frankly dislocated from the ulnar groove in the majority of our asymptomatic volunteers, particularly when the forearm is supine. Overall greater degrees of tendon translation were observed in shorter and shallower ulnar grooves.",
"title": ""
},
{
"docid": "55f80d7b459342a41bb36a5c0f6f7e0d",
"text": "A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.",
"title": ""
},
{
"docid": "6d0ae5d9d8cff434cfaabe476d608cb6",
"text": "10.1 INTRODUCTION Pulse compression involves the transmission of a long coded pulse and the processing of the received echo to obtain a relatively narrow pulse. The increased detection capability of a long-pulse radar system is achieved while retaining the range resolution capability of a narrow-pulse system. Several advantages are obtained. Transmission of long pulses permits a more efficient use of the average power capability of the radar. Generation of high peak power signals is avoided. The average power of the radar may be increased without increasing the pulse repetition frequency (PRF) and, hence, decreasing the radar's unambiguous range. An increased system resolving capability in doppler is also obtained as a result of the use of the long pulse. In addition, the radar is less vulnerable to interfering signals that differ from the coded transmitted signal. A long pulse may be generated from a narrow pulse. A narrow pulse contains a large number of frequency components with a precise phase relationship between them. If the relative phases are changed by a phase-distorting filter, the frequency components combine to produce a stretched, or expanded, pulse. This expanded pulse is the pulse that is transmitted. The received echo is processed in the receiver by a compression filter. The compression filter readjusts the relative phases of the frequency components so that a narrow or compressed pulse is again produced. The pulse compression ratio is the ratio of the width of the expanded pulse to that of the compressed pulse. The pulse compression ratio is also equal to the product of the time duration and the spectral bandwidth (time-bandwidth product) of the transmitted signal. A pulse compression radar is a practical implementation of a matched-filter system. The coded signal may be represented either as a frequency response H(U) or as an impulse time response h(i) of a coding filter. In Fig. 10. Ia 9 the coded signal is obtained by exciting the coding filter //(<*>) with a unit impulse. The received signal is fed to the matched filter, whose frequency response is the complex conjugate #*(a>) of the coding filter. The output of the matched-filter section is the compressed pulse, which is given by the inverse Fourier transform of the product of the signal spectrum //(a>) and the matched-filter response //*(o>):",
"title": ""
},
{
"docid": "9feac5bf882c3e812755f87a21a59652",
"text": "In 2013, George Church and his colleagues at Harvard University [2] in Cambridge, Massachusetts published \"RNA-Guided Human Genome Engineering via Cas 9,\" in which they detailed their use of RNA-guided Cas 9 to genetically modify genes [3] in human cells. Researchers use RNA-guided Cas 9 technology to modify the genetic information of organisms, DNA, by targeting specific sequences of DNA and subsequently replacing those targeted sequences with different DNA sequences. Church and his team used RNA-guided Cas 9 technology to edit the genetic information in human cells. Church and his colleagues also created a database that identified 190,000 unique guide RNAs for targeting almost half of the human genome [4] that codes for proteins. In \"RNA-Guided Human Genome Engineering via Cas 9,\" the authors demonstrated that RNA-guided Cas 9 was a robust and simple tool for genetic engineering, which has enabled scientists to more easily manipulate genomes for the study of biological processes and genetic diseases.",
"title": ""
},
{
"docid": "6ba2aed7930d4c7fee807a0f4904ddc5",
"text": "This work is released in biometric field and has as goal, development of a full automatic fingerprint identification system based on support vector machine. Promising Results of first experiences pushed us to develop codification and recognition algorithms which are specifically associated to this system. In this context, works were consecrated on algorithm developing of the original image processing, minutiae and singular points localization; Gabor filters coding and testing these algorithms on well known databases which are: FVC2004 databases & FingerCell database. Performance Evaluating has proved that SVM achieved a good recognition rate in comparing with results obtained using a classic neural network RBF. Keywords—Biometry, Core and Delta points Detection, Gabor filters coding, Image processing and Support vector machine.",
"title": ""
},
{
"docid": "b192bce1472ba8392af48982fde5da20",
"text": "This paper presents a new setup and investigates neural model predictive and variable structure controllers designed to control the single-degree-of-freedom rotary manipulator actuated by shape memory alloy (SMA). SMAs are a special group of metallic materials and have been widely used in the robotic field because of their particular mechanical and electrical characteristics. SMA-actuated manipulators exhibit severe hysteresis, so the controllers should confront this problem and make the manipulator track the desired angle. In this paper, first, a mathematical model of the SMA-actuated robot manipulator is proposed and simulated. The controllers are then designed. The results set out the high performance of the proposed controllers. Finally, stability analysis for the closed-loop system is derived based on the dissipativity theory.",
"title": ""
},
{
"docid": "7c4444cba23e78f7159e336638947189",
"text": "Certification of keys and attributes is in practice typically realized by a hierarchy of issuers. Revealing the full chain of issuers for certificate verification, however, can be a privacy issue since it can leak sensitive information about the issuer's organizational structure or about the certificate owner. Delegatable anonymous credentials solve this problem and allow one to hide the full delegation (issuance) chain, providing privacy during both delegation and presentation of certificates. However, the existing delegatable credentials schemes are not efficient enough for practical use.\n In this paper, we present the first hierarchical (or delegatable) anonymous credential system that is practical. To this end, we provide a surprisingly simple ideal functionality for delegatable credentials and present a generic construction that we prove secure in the UC model. We then give a concrete instantiation using a recent pairing-based signature scheme by Groth and describe a number of optimizations and efficiency improvements that can be made when implementing our concrete scheme. The latter might be of independent interest for other pairing-based schemes as well. Finally, we report on an implementation of our scheme in the context of transaction authentication for blockchain, and provide concrete performance figures.",
"title": ""
},
{
"docid": "58920ab34e358c13612d793bb3127c9f",
"text": "We revisit the problem of interval estimation of a binomial proportion. The erratic behavior of the coverage probability of the standard Wald confidence interval has previously been remarked on in the literature (Blyth and Still, Agresti and Coull, Santner and others). We begin by showing that the chaotic coverage properties of the Wald interval are far more persistent than is appreciated. Furthermore, common textbook prescriptions regarding its safety are misleading and defective in several respects and cannot be trusted. This leads us to consideration of alternative intervals. A number of natural alternatives are presented, each with its motivation and context. Each interval is examined for its coverage probability and its length. Based on this analysis, we recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n. We also provide an additional frequentist justification for use of the Jeffreys interval.",
"title": ""
},
{
"docid": "83b79fc95e90a303f29a44ef8730a93f",
"text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and includes variety of objects like smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such enormous number of devices connected to internet provides many kinds of services. They also produce huge amount of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computer, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructures ,software and applications. Cloud based platforms help to connect to the things around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications. Hence, cloud acts as a front end to access IoT. Applications that interact with devices like sensors have special requirements of massive storage to store big data, huge computation power to enable the real time processing of the data, information and high speed network to stream audio or video. Here we have describe how Internet of Things and Cloud computing can work together can address the Big Data problems. We have also illustrated about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture, Environment monitoring,etc. Finally, we propose a prototype model for providing sensing as a service on cloud.",
"title": ""
},
{
"docid": "4d5e72046bfd44b9dc06dfd02812f2d6",
"text": "Recommender systems in the last decade opened new interactive channels between buyers and sellers leading to new concepts involved in the marketing strategies and remarkable positive gains in online sales. Businesses intensively aim to maintain customer loyalty, satisfaction and retention; such strategic longterm values need to be addressed by recommender systems in a more tangible and deeper manner. The reason behind the considerable growth of recommender systems is for tracking and analyzing the buyer behavior on the one to one basis to present items on the web that meet his preference, which is the core concept of personalization. Personalization is always related to the relationship between item and user leaving out the contextual information about this relationship. User's buying decision is not only affected by the presented item, but also influenced by its price and the context in which the item is presented, such as time or place. Recently, new system has been designed based on the concept of utilizing price personalization in the recommendation process. This system is newly coined as personalized pricing recommender system (PPRS). We propose personalized pricing recommender system with a novel approach of calculating consumer online real value to determine dynamically his personalized discount, which can be generically applied on the normal price of any recommend item through its predefined discount rules.",
"title": ""
},
{
"docid": "f3820e94a204cd07b04e905a9b1e4834",
"text": "Successful analysis of player skills in video games has important impacts on the process of enhancing player experience without undermining their continuous skill development. Moreover, player skill analysis becomes more intriguing in team-based video games because such form of study can help discover useful factors in effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (MultiPlayer Online Battle Arena) games, with the goal to understand what player skill factors are essential for the outcome of a game match. To understand the construct of MOBA player skills, we utilize various skill-based predictive models to decompose player skills into interpretative parts, the impact of which are assessed in statistical terms. We apply this analysis approach on two widely known MOBAs, namely League of Legends (LoL) and Defense of the Ancients 2 (DOTA2). The finding is that base skills of in-game avatars, base skills of players, and players’ champion-specific skills are three prominent skill components influencing LoL’s match outcomes, while those of DOTA2 are mainly impacted by in-game avatars’ base skills but not much by the other two. PLAYER SKILL DECOMPOSITION IN MULTIPLAYER ONLINE BATTLE ARENAS 3 Player Skill Decomposition in Multiplayer Online Battle Arenas",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "538ae92edc07057ff0b40c9c657deba4",
"text": "Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions; (2) what tradeoffs exist between fine granularity and coarse granularity prioritization techniques; (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? This paper reports the results of new experiments addressing these questions.",
"title": ""
},
{
"docid": "429f27ab8039a9e720e9122f5b1e3bea",
"text": "We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.",
"title": ""
},
{
"docid": "1faaf86a7f43f6921d8c754fbc9ea0e1",
"text": "Department of Mechanical Engineering, Politécnica/COPPE, Federal University of Rio de Janeiro, UFRJ, Cid. Universitaria, Cx. Postal: 68503, Rio de Janeiro, RJ, 21941-972, Brazil, [email protected], [email protected], [email protected], [email protected] Department of Mechanical and Materials Engineering, Florida International University, 10555 West Flagler Street, EC 3462, Miami, Florida 33174, U.S.A., [email protected] Department of Subsea Technology, Petrobras Research and Development Center – CENPES, Av. Horácio Macedo, 950, Cidade Universitária, Ilha do Fundão, 21941-915, Rio de Janeiro, RJ, Brazil, [email protected] Université de Toulouse ; Mines Albi ; CNRS; Centre RAPSODEE, Campus Jarlard, F-81013 Albi cedex 09, France, [email protected]",
"title": ""
},
{
"docid": "fdfbcacd5a31038ecc025315c7483b5a",
"text": "Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer. Although important, answer selection is only one stage in a standard end-to-end question answering pipeline. is paper explores the eectiveness of convolutional neural networks (CNNs) for answer selection in an end-to-end context using the standard TrecQA dataset. We observe that a simple idf-weighted word overlap algorithm forms a very strong baseline, and that despite substantial eorts by the community in applying deep learning to tackle answer selection, the gains are modest at best on this dataset. Furthermore, it is unclear if a CNN is more eective than the baseline in an end-to-end context based on standard retrieval metrics. To further explore this nding, we conducted a manual user evaluation, which conrms that answers from the CNN are detectably beer than those from idf-weighted word overlap. is result suggests that users are sensitive to relatively small dierences in answer selection quality.",
"title": ""
},
{
"docid": "ba1d1f2cfeac871bf63164cb0b431af9",
"text": "The motivation behind model-driven software development is to move the focus of work from programming to solution modeling. The model-driven approach has a potential to increase development productivity and quality by describing important aspects of a solution with more human-friendly abstractions and by generating common application fragments with templates. For this vision to become reality, software development tools need to automate the many tasks of model construction and transformation, including construction and transformation of models that can be round-trip engineered into code. In this article, we briefly examine different approaches to model transformation and offer recommendations on the desirable characteristics of a language for describing model transformations. In doing so, we are hoping to offer a measuring stick for judging the quality of future model transformation technologies.",
"title": ""
},
{
"docid": "6a7bc6a1f1d9486304edac87635dc0e9",
"text": "We exploit the falloff of acuity in the visual periphery to accelerate graphics computation by a factor of 5-6 on a desktop HD display (1920x1080). Our method tracks the user's gaze point and renders three image layers around it at progressively higher angular size but lower sampling rate. The three layers are then magnified to display resolution and smoothly composited. We develop a general and efficient antialiasing algorithm easily retrofitted into existing graphics code to minimize \"twinkling\" artifacts in the lower-resolution layers. A standard psychophysical model for acuity falloff assumes that minimum detectable angular size increases linearly as a function of eccentricity. Given the slope characterizing this falloff, we automatically compute layer sizes and sampling rates. The result looks like a full-resolution image but reduces the number of pixels shaded by a factor of 10-15.\n We performed a user study to validate these results. It identifies two levels of foveation quality: a more conservative one in which users reported foveated rendering quality as equivalent to or better than non-foveated when directly shown both, and a more aggressive one in which users were unable to correctly label as increasing or decreasing a short quality progression relative to a high-quality foveated reference. Based on this user study, we obtain a slope value for the model of 1.32-1.65 arc minutes per degree of eccentricity. This allows us to predict two future advantages of foveated rendering: (1) bigger savings with larger, sharper displays than exist currently (e.g. 100 times speedup at a field of view of 70° and resolution matching foveal acuity), and (2) a roughly linear (rather than quadratic or worse) increase in rendering cost with increasing display field of view, for planar displays at a constant sharpness.",
"title": ""
}
] |
scidocsrr
|
4d2d06a55152dd5fd9f4ebae403a7b73
|
How Information Technology Governance Mechanisms and Strategic Alignment Influence Organizational Performance: Insights from a Matched Survey of Business and IT Managers
|
[
{
"docid": "4874f55e577bea77deed2750a9a73b30",
"text": "Best practice exemplars suggest that digital platforms play a critical role in managing supply chain activities and partnerships that generate perjormance gains for firms. However, there is Umited academic investigation on how and why information technology can create performance gains for firms in a supply chain management (SCM) context. Grant's (1996) theoretical notion of higher-order capabilities and a hierarchy of capabilities has been used in recent information systems research by Barua et al. (2004). Sambamurthy et al. (2003), and Mithas et al. (2004) to reframe the conversation from the direct performance impacts of IT resources and investments to how and why IT shapes higher-order proeess capabilities that ereate performance gains for firms. We draw on the emerging IT-enabled organizational capabilities perspective to suggest that firms that develop IT infrastrueture integration for SCM and leverage it to create a higher-order supply chain integration capability generate significant and sustainable performance gains. A research model is developed to investigate the hierarchy oflT-related capabilities and their impaet on firm performance. Data were collected from } 10 supply chain and logisties managers in manufacturing and retail organizations. Our results suggest that integrated IT infrastructures enable firms to develop the higher-order capability of supply chain process integration. This eapability enables firms to unbundle information flows from physical flows, and to share information with their supply chain partners to create information-based approaches for superior demand planning, for the staging and movement of physical products, and for streamlining voluminous and complex financial work processes. Furthermore. IT-enabled supply chain integration capability results in significant and sustained firm performance gains, especially in operational excellence and revenue growth. Managerial",
"title": ""
}
] |
[
{
"docid": "ed28d1b8142a2149a1650e861deb7c53",
"text": "Over the last few years, the use of virtualization technologies has increased dramatically. This makes the demand for efficient and secure virtualization solutions become more obvious. Container-based virtualization and hypervisor-based virtualization are two main types of virtualization technologies that have emerged to the market. Of these two classes, container-based virtualization is able to provide a more lightweight and efficient virtual environment, but not without security concerns. In this paper, we analyze the security level of Docker, a well-known representative of container-based approaches. The analysis considers two areas: (1) the internal security of Docker, and (2) how Docker interacts with the security features of the Linux kernel, such as SELinux and AppArmor, in order to harden the host system. Furthermore, the paper also discusses and identifies what could be done when using Docker to increase its level of security.",
"title": ""
},
{
"docid": "55d7db89621dc57befa330c6dea823bf",
"text": "In this paper we propose CUDA-based implementations of two 3D point sets registration algorithms: Soft assign and EM-ICP. Both algorithms are known for being time demanding, even on modern multi-core CPUs. Our GPUbased implementations vastly outperform CPU ones. For instance, our CUDA EM-ICP aligns 5000 points in less than 7 seconds on a GeForce 8800GT, while the same implementation in OpenMP on an Intel Core 2 Quad would take 7 minutes.",
"title": ""
},
{
"docid": "5c26713d33001fc91ce19f551adac492",
"text": "Recurrent neural network language models (RNNLMs) have recently become increasingly popular for many applications i ncluding speech recognition. In previous research RNNLMs have normally been trained on well-matched in-domain data. The adaptation of RNNLMs remains an open research area to be explored. In this paper, genre and topic based RNNLM adaptation techniques are investigated for a multi-genre broad cast transcription task. A number of techniques including Proba bilistic Latent Semantic Analysis, Latent Dirichlet Alloc ation and Hierarchical Dirichlet Processes are used to extract sh ow level topic information. These were then used as additional input to the RNNLM during training, which can facilitate unsupervised test time adaptation. Experiments using a state-o f-theart LVCSR system trained on 1000 hours of speech and more than 1 billion words of text showed adaptation could yield pe rplexity reductions of 8% relatively over the baseline RNNLM and small but consistent word error rate reductions.",
"title": ""
},
{
"docid": "2536b839d46da28bfe65209e4573b771",
"text": "This paper presents a language-independent ontology (LION) construction method that uses tagged images in an image folksonomy. Existing multilingual frameworks that construct an ontology deal with concepts translated on the basis of parallel corpora, which are not always available; however, the proposed method enables LION construction without parallel corpora by using visual features extracted from tagged images as the alternative. In the proposed method, visual similarities in tagged images are leveraged to aggregate synonymous concepts across languages. The aggregated concepts take on intrinsic semantics of themselves, while they also hold distinct characteristics in different languages. Then relationships between concepts are extracted on the basis of visual and textual features. The proposed method constructs a LION whose nodes and edges correspond to the aggregated concepts and relationships between them, respectively. The LION enables successful image retrieval across languages since each of the aggregated concepts can be referred to in different languages. Consequently, the proposed method removes the language barriers by providing an easy way to access a broader range of tagged images for users in the folksonomy, regardless of the language they use.",
"title": ""
},
{
"docid": "4cf05216efd9f075024d4a3e63cdd511",
"text": "BACKGROUND\nSecondary failure of oral hypoglycemic agents is common in patients with type 2 diabetes mellitus (T2DM); thus, patients often need insulin therapy. The most common complication of insulin treatment is lipohypertrophy (LH).\n\n\nOBJECTIVES\nThis study was conducted to estimate the prevalence of LH among insulin-treated patients with Patients with T2DM, to identify the risk factors for the development of LH, and to examine the association between LH and glycemic control.\n\n\nPATIENTS AND METHODS\nA total of 1090 patients with T2DM aged 20 to 89 years, who attended the diabetes clinics at the National Center for Diabetes, Endocrinology, and Genetics (NCDEG, Amman, Jordan) between October 2011 and January 2012, were enrolled. The presence of LH was examined by inspection and palpation of insulin injection sites at the time of the visit as relevant clinical and laboratory data were obtained. The LH was defined as a local tumor-like swelling of subcutaneous fatty tissue at the site of repeated insulin injections.\n\n\nRESULTS\nThe overall prevalence of LH was 37.3% (27.4% grade 1, 9.7% grade 2, and 0.2% grade 3). The LH was significantly associated with the duration of diabetes, needle length, duration of insulin therapy, lack of systematic rotation of insulin injection sites, and poor glycemic control.\n\n\nCONCLUSIONS\nThe LH is a common problem in insulin-treated Jordanian patients with T2DM. More efforts are needed to educate patients and health workers on simple interventions such as using shorter needles and frequent rotation of the insulin injection sites to avoid LH and improve glycemic control.",
"title": ""
},
{
"docid": "4b3425ce40e46b7a595d389d61daca06",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "af820bc868ea4a04480ee466c0f2fafa",
"text": "Attackers target many different types of computer systems in use today, exploiting software vulnerabilities to take over the device and make it act maliciously. Reports of numerous attacks have been published, against the constrained embedded devices of the Internet of Things, mobile devices like smartphones and tablets, high-performance desktop and server environments, as well as complex industrial control systems. Trusted computing architectures give users and remote parties like software vendors guarantees about the behaviour of the software they run, protecting them against software-level attackers. This paper defines the security properties offered by them, and presents detailed descriptions of twelve hardware-based attestation and isolation architectures from academia and industry. We compare all twelve designs with respect to the security properties and architectural features they offer. The presented architectures have been designed for a wide range of devices, supporting different security properties.",
"title": ""
},
{
"docid": "78437d8aafd3bf09522993447b0a4d50",
"text": "Over the past 30 years, policy makers and professionals who provide services to older adults with chronic conditions and impairments have placed greater emphasis on conceptualizing aging in place as an attainable and worthwhile goal. Little is known, however, of the changes in how this concept has evolved in aging research. To track trends in aging in place, we examined scholarly articles published from 1980 to 2010 that included the concept in eleven academic gerontology journals. We report an increase in the absolute number and proportion of aging-in-place manuscripts published during this period, with marked growth in the 2000s. Topics related to the environment and services were the most commonly examined during 2000-2010 (35% and 31%, resp.), with a substantial increase in manuscripts pertaining to technology and health/functioning. This underscores the increase in diversity of topics that surround the concept of aging-in-place literature in gerontological research.",
"title": ""
},
{
"docid": "ef3ce72d709d2eca9c86080053d5afd6",
"text": "This work investigates the process of selecting, extracting and reorganizing content from Semantic Web information sources, to produce an ontology meeting the specifications of a particular domain and/or task. The process is combined with traditional text-based ontology learning methods to achieve tolerance to knowledge incompleteness. The paper describes the approach and presents experiments in which an ontology was built for a diet evaluation task. Although the example presented concerns the specific case of building a nutritional ontology, the methods employed are domain independent and transferrable to other use cases.",
"title": ""
},
{
"docid": "529929af902100d25e08fe00d17e8c1a",
"text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.",
"title": ""
},
{
"docid": "1585951d989c0e5210e5fee28e91f353",
"text": "Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy.",
"title": ""
},
{
"docid": "153b5c38978c54391bd5ec097416883c",
"text": "Applying simple natural language processing methods on social media data have shown to be able to reveal insights of specific mental disorders. However, few studies have employed fine-grained sentiment or emotion related analysis approaches in the detection of mental health conditions from social media messages. This work, for the first time, employed fine-grained emotions as features and examined five popular machine learning classifiers in the task of identifying users with selfreported mental health conditions (i.e. Bipolar, Depression, PTSD, and SAD) from the general public. We demonstrated that the support vector machines and the random forests classifiers with emotion-based features and combined features showed promising improvements to the performance on this task.",
"title": ""
},
{
"docid": "be0354a12d54c7be2ae4cd4c4ae22866",
"text": "CONTEXT\nThe association of body mass index (BMI) with cause-specific mortality has not been reported for the US population.\n\n\nOBJECTIVE\nTo estimate cause-specific excess deaths associated with underweight (BMI <18.5), overweight (BMI 25-<30), and obesity (BMI > or =30).\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nCause-specific relative risks of mortality from the National Health and Nutrition Examination Survey (NHANES) I, 1971-1975; II, 1976-1980; and III, 1988-1994, with mortality follow-up through 2000 (571,042 person-years of follow-up) were combined with data on BMI and other covariates from NHANES 1999-2002 with underlying cause of death information for 2.3 million adults 25 years and older from 2004 vital statistics data for the United States.\n\n\nMAIN OUTCOME MEASURES\nCause-specific excess deaths in 2004 by BMI levels for categories of cardiovascular disease (CVD), cancer, and all other causes (noncancer, non-CVD causes).\n\n\nRESULTS\nBased on total follow-up, underweight was associated with significantly increased mortality from noncancer, non-CVD causes (23,455 excess deaths; 95% confidence interval [CI], 11,848 to 35,061) but not associated with cancer or CVD mortality. Overweight was associated with significantly decreased mortality from noncancer, non-CVD causes (-69 299 excess deaths; 95% CI, -100 702 to -37 897) but not associated with cancer or CVD mortality. Obesity was associated with significantly increased CVD mortality (112,159 excess deaths; 95% CI, 87,842 to 136,476) but not associated with cancer mortality or with noncancer, non-CVD mortality. In further analyses, overweight and obesity combined were associated with increased mortality from diabetes and kidney disease (61 248 excess deaths; 95% CI, 49 685 to 72,811) and decreased mortality from other noncancer, non-CVD causes (-105,572 excess deaths; 95% CI, -161 816 to -49,328). Obesity was associated with increased mortality from cancers considered obesity-related (13,839 excess deaths; 95% CI, 1920 to 25,758) but not associated with mortality from other cancers. Comparisons across surveys suggested a decrease in the association of obesity with CVD mortality over time.\n\n\nCONCLUSIONS\nThe BMI-mortality association varies by cause of death. These results help to clarify the associations of BMI with all-cause mortality.",
"title": ""
},
{
"docid": "d2086d9c52ca9d4779a2e5070f9f3009",
"text": "Though action recognition based on complete videos has achieved great success recently, action prediction remains a challenging task as the information provided by partial videos is not discriminative enough for classifying actions. In this paper, we propose a Deep Residual Feature Learning (DeepRFL) framework to explore more discriminative information from partial videos, achieving similar representations as those of complete videos. The proposed method is based on residual learning, which captures the salient differences between partial videos and their corresponding full videos. The partial videos can attain the missing information by learning from features of complete videos and thus improve the discriminative power. Moreover, our model can be trained efficiently in an end-to-end fashion. Extensive evaluations on the challenging UCF101 and HMDB51 datasets demonstrate that the proposed method outperforms state-of-the-art results.",
"title": ""
},
{
"docid": "ff39f9fdb98981137f93d156150e1b83",
"text": "We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.",
"title": ""
},
{
"docid": "09323dc1ce43e3f963d04172168a03c2",
"text": "Geometric measurements of the human hand have been used for identity authentication in a number of commercial systems. Yet, there is not much open public literature addressing research issues underlying hand geometry-based identity authentication. This work is our attempt to draw attention to this important biometric by designing a prototype hand geometrybased identity authentication system. We also present our preliminary verification results based on hand measurements of 50 individuals captured over a period of time. The results are encouraging and we plan to address issues to improve the system performance.",
"title": ""
},
{
"docid": "6f045c9f48ce87f6b425ac6c5f5d5e9d",
"text": "In the modern web, the browser has emerged as the vehicle of choice, which users are to trust, customize, and use, to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications.\n In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large scale analysis of the million most popular websites of the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.",
"title": ""
},
{
"docid": "3d2e170b4cd31d0e1a28c968f0b75cf6",
"text": "Fog Computing is a new variety of the cloud computing paradigm that brings virtualized cloud services to the edge of the network to control the devices in the IoT. We present a pattern for fog computing which describes its architecture, including its computing, storage and networking services. Fog computing is implemented as an intermediate platform between end devices and cloud computing data centers. The recent popularity of the Internet of Things (IoT) has made fog computing a necessity to handle a variety of devices. It has been recognized as an important platform to provide efficient, location aware, close to the edge, cloud services. Our model includes most of the functionality found in current fog architectures.",
"title": ""
},
{
"docid": "f1cd96ddd519f35cf3ddc19f84d232cf",
"text": "This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This approach is similar to how human brain works using different interpretation levels or layers of most representative and useful features resulting into a hierarchical learned representation. These methods have been shown to outpace traditional approaches of most challenging problems in several areas such as speech recognition and object detection. Invasive breast cancer detection is a time consuming and challenging task primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of grading tumor aggressiveness and predicting patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which would also ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained over a large amount of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated over a WSI dataset from 162 patients diagnosed with IDC. 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via expert delineation of the region of cancer by an expert pathologist on the digitized slides. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI. Our method yielded the best quantitative results for automatic detection of IDC regions in WSI in terms of F-measure and balanced accuracy (71.80%, 84.23%), in comparison with an approach using handcrafted image features (color, texture and edges, nuclear textural and architecture), and a machine learning classifier for invasive tumor classification using a Random Forest. The best performing handcrafted features were fuzzy color histogram (67.53%, 78.74%) and RGB histogram (66.64%, 77.24%). Our results also suggest that at least some of the tissue classification mistakes (false positives and false negatives) were less due to any fundamental problems associated with the approach, than the inherent limitations in obtaining a very highly granular annotation of the diseased area of interest by an expert pathologist.",
"title": ""
},
{
"docid": "18c866d49ecee8eeaceb1dc365a81ffd",
"text": "The design of a new high-efficiency switching power amplifier with an ultralow-power spread spectrum clock generator (SSCG) is first reported in this paper. An effective low-power frequency modulation method is first proposed to reduce the electromagnetic interference of the pulse width modulation class D power amplifier without degrading its power efficiency. Also, a simple RC voltage feedback circuit is used to reduce the total harmonic distortion (THD). This amplifier proves to be a cost-effective solution for designing high fidelity and high efficiency audio power amplifiers for portable applications. Measurement results show that the power efficiency and THD can reach 90% and 0.05%, respectively. The power dissipation of the SSCG is only 112 ¿W. The harmonic peaks of the switching frequency are greatly reduced when the SSCG technique is applied to the amplifier design. The impact of the SSCG on the THD of the class D power amplifier is also first reported in this paper. This switching power amplifier is implemented using a Taiwan Semiconductor Manufacture Company (TSMC) 0.35- ¿m CMOS process.",
"title": ""
}
] |
scidocsrr
|
396b65824d1d9a865202d59c9aa80592
|
Study on flip chip assembly of high density micro-LED array
|
[
{
"docid": "5e0663f759b23147f9d1a3eeb6ab4b04",
"text": "We describe the fabrication and characterization of matrix-addressable microlight-emitting diode (micro-LED) arrays based on InGaN, having elemental diameter of 20 /spl mu/m and array size of up to 128 /spl times/ 96 elements. The introduction of a planar topology prior to contact metallization is an important processing step in advancing the performance of these devices. Planarization is achieved by chemical-mechanical polishing of the SiO/sub 2/-deposited surface. In this way, the need for a single contact pad for each individual element can be eliminated. The resulting significant simplification in the addressing of the pixels opens the way to scaling to devices with large numbers of elements. Compared to conventional broad-area LEDs, the micrometer-scale devices exhibit superior light output and current handling capabilities, making them excellent candidates for a range of uses including high-efficiency and robust microdisplays.",
"title": ""
}
] |
[
{
"docid": "f29d0ea5ff5c96dadc440f4d4aa229c6",
"text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.",
"title": ""
},
{
"docid": "6fac5265abac9f07d355dc794522a061",
"text": "The deployment of cryptocurrencies in e-commerce has reached a significant number of transactions and continuous increases in monetary circulation; nevertheless, they face two impediments: a lack of awareness of the technological utility, and a lack of trust among consumers. E-commerce carried out through social networks expands its application to a new paradigm called social commerce. Social commerce uses the content generated within social networks to attract new consumers and influence their behavior. The objective of this paper is to analyze the role played by social media in increasing trust and intention to use cryptocurrencies in making electronic payments. It develops a model that combines constructs from social support theory, social commerce, and the technology acceptance model. This model is evaluated using the partial least square analysis. The obtained results show that social commerce increases the trust and intention to use cryptocurrencies. However, mutual support among participants does not generate sufficient trust to adequately promote the perceived usefulness of cryptocurrencies. This research provides a practical tool for analyzing how collaborative relationships that emerge in social media can influence or enhance the adoption of a new technology in terms of perceived trust and usefulness. Furthermore, it provides a significant contribution to consumer behavior research by applying the social support theory to the adoption of new information technologies. These theoretical and practical contributions are detailed in the final section of the paper.",
"title": ""
},
{
"docid": "59dfc58e555690ce8ffab2f92d427d2d",
"text": "Testbeds and experimental network facilities accelerate the expansion of disruptive Internet services and support their evolution. The integration of IoT technologies in the context of Unmanned Vehicles (UxVs) and their deployment in federated, real–world testbeds introduce various challenging research issues. This paper presents the Semantic Aggregate Manager (SAM) that exploits semantic technologies for modeling and managing resources of federated IoT Testbeds. SAM introduces new semantics–based features tailored to the needs of IoT enabled UxVs, but on the same time allows the compatibility with existing legacy, “de facto” standardised protocols, currently utilized by multiple federated testbed management systems. The proposed framework is currently being deployed in order to be evaluated in real–world testbeds across several sites in Europe.",
"title": ""
},
{
"docid": "ace9af1a19077f66b57275677cac60cb",
"text": "Recently several researchers have investigated techniques for using data to learn Bayesian networks containing compact representations for the conditional probability distributions (CPDs) stored at each node. The majority of this work has concentrated on using decision-tree representations for the CPDs. In addition, researchers typically apply non-Bayesian (or asymptotically Bayesian) scoring functions such as MDL to evaluate the goodness-oft of networks to the data. In this paper we investigate a Bayesian approach to learning Bayesian networks that contain the more general decision-graph representations of the CPDs. First, we describe how to evaluate the posterior probability|that is, the Bayesian score|of such a network, given a database of observed cases. Second, we describe various search spaces that can be used, in conjunction with a scoring function and a search procedure, to identify one or more high-scoring networks. Finally, we present an experimental evaluation of the search spaces, using a greedy algorithm and a Bayesian scoring function.",
"title": ""
},
{
"docid": "517ec608208a669872a1d11c1d7836a3",
"text": "Hafez is an automatic poetry generation system that integrates a Recurrent Neural Network (RNN) with a Finite State Acceptor (FSA). It generates sonnets given arbitrary topics. Furthermore, Hafez enables users to revise and polish generated poems by adjusting various style configurations. Experiments demonstrate that such “polish” mechanisms consider the user’s intention and lead to a better poem. For evaluation, we build a web interface where users can rate the quality of each poem from 1 to 5 stars. We also speed up the whole system by a factor of 10, via vocabulary pruning and GPU computation, so that adequate feedback can be collected at a fast pace. Based on such feedback, the system learns to adjust its parameters to improve poetry quality.",
"title": ""
},
{
"docid": "df114396d546abfc9b6f1767e3bab8db",
"text": "I briefly highlight the salient properties of modified-inertia formulations of MOND, contrasting them with those of modified-gravity formulations, which describe practically all theories propounded to date. Future data (e.g. the establishment of the Pioneer anomaly as a new physics phenomenon) may prefer one of these broad classes of theories over the other. I also outline some possible starting ideas for modified inertia. 1 Modified MOND inertia vs. modified MOND gravity MOND is a modification of non-relativistic dynamics involving an acceleration constant a 0. In the formal limit a 0 → 0 standard Newtonian dynamics is restored. In the deep MOND limit, a 0 → ∞, a 0 and G appear in the combination (Ga 0). Much of the NR phenomenology follows from this simple prescription, including the asymptotic flatness of rotation curves, the mass-velocity relations (baryonic Tully-fisher and Faber Jackson relations), mass discrepancies in LSB galaxies, etc.. There are many realizations (theories) that embody the above dictates, relativistic and non-relativistic. The possibly very significant fact that a 0 ∼ cH 0 ∼ c(Λ/3) 1/2 may hint at the origin of MOND, and is most probably telling us that a. MOND is an effective theory having to do with how the universe at large shapes local dynamics, and b. in a Lorentz universe (with H 0 = 0, Λ = 0) a 0 = 0 and standard dynamics holds. We can broadly classify modified theories into two classes (with the boundary not so sharply defined): In modified-gravity (MG) formulations the field equation of the gravitational field (potential, metric) is modified; the equations of motion of other degrees of freedom (DoF) in the field are not. In modified-inertia (MI) theories the opposite it true. More precisely, in theories derived from an action modifying inertia is tantamount to modifying the kinetic (free) actions of the non-gravitational degrees of freedom. Local, relativistic theories in which the kinetic",
"title": ""
},
{
"docid": "209d202fd4b0e2376894345e3806bb70",
"text": "Support vector data description (SVDD) is a useful method for outlier detection and has been applied to a variety of applications. However, in the existing optimization procedure of SVDD, there are some issues which may lead to improper usage of SVDD. Some of the issues might already be known in practice, but the theoretical discussion, justification and correction are still lacking. Given the wide use of SVDD, these issues inspire us to carefully study SVDD in the view of convex optimization. In particular, we derive the dual problem with strong duality, prove theorems to handle theoretical insufficiency in the literature of SVDD, investigate some novel extensions of SVDD, and come up with an implementation of training SVDD with theoretical guarantee.",
"title": ""
},
{
"docid": "39321bc85746dc43736a0435c939c7da",
"text": "We use recent network calculus results to study some properties of lossless multiplexing as it may be used in guaranteed service networks. We call network calculus a set of results that apply min-plus algebra to packet networks. We provide a simple proof that shaping a traffic stream to conform to a burstiness constraint preserves the original constraints satisfied by the traffic stream We show how all rate-based packet schedulers can be modeled with a simple rate latency service curve. Then we define a general form of deterministic effective bandwidth and equivalent capacity. We find that call acceptance regions based on deterministic criteria (loss or delay) are convex, in contrast to statistical cases where it is the complement of the region which is convex. We thus find that, in general, the limit of the call acceptance region based on statistical multiplexing when the loss probability target tends to 0 may be strictly larger than the call acceptance region based on lossless multiplexing. Finally, we consider the problem of determining the optimal parameters of a variable bit rate (VBR) connection when it is used as a trunk, or tunnel, given that the input traffic is known. We find that there is an optimal peak rate for the VBR trunk, essentially insensitive to the optimization criteria. For a linear cost function, we find an explicit algorithm for the optimal remaining parameters of the VBR trunk.",
"title": ""
},
{
"docid": "974d9752c8aaf2ab309c99337802e8a4",
"text": "Utility functions provide a natural and advantageous framework for achieving self-optimization in distributed autonomic computing systems. We present a distributed architecture, implemented in a realistic prototype data center, that demonstrates how utility functions can enable a collection of autonomic elements to continually optimize the use of computational resources in a dynamic, heterogeneous environment. Broadly, the architecture is a two-level structure of independent autonomic elements that supports flexibility, modularity, and self-management. Individual autonomic elements manage application resource usage to optimize local service-level utility functions, and a global arbiter allocates resources among application environments based on resource-level utility functions obtained from the managers of the applications. We present empirical data that demonstrate the effectiveness of our utility function scheme in handling realistic, fluctuating Web-based transactional workloads running on a Linux cluster.",
"title": ""
},
{
"docid": "1d8db3e4aada7f5125cd72df4dfab1f4",
"text": "Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. We describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. A single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. The representation is compact and can be computed quickly, making it suitable for large data sets. Our implementation, written for use in a large-scale 3D digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. We have demonstrated the system on scanned models containing hundreds of millions of samples.",
"title": ""
},
{
"docid": "50ddbbddb1fb432965690739b66bed94",
"text": "Nowadays, many e-commerce websites allow users to login with their existing social networking accounts. When a new user comes to an e-commerce website, it is interesting to study whether the information from external social media platforms can be utilized to alleviate the cold-start problem. In this paper, we focus on a specific task on cross-site information sharing, i.e., leveraging the text posted by a user on the social media platform (termed as social text) to infer his/her purchase preference of product categories on an e-commerce platform. To solve the task, a key problem is how to effectively represent the social text in a way that its information can be utilized on the e-commerce platform. We study two major kinds of text representation methods for predicting cross-site purchase preference, including shallow textual features and deep textual features learned by deep neural network models. We conduct extensive experiments on a large linked dataset, and our experimental results indicate that it is promising to utilize the social text for predicting purchase preference. Specially, the deep neural network approach has shown a more powerful predictive ability when the number of categories becomes large.",
"title": ""
},
{
"docid": "6112a0dc02fde9788730b6e634177475",
"text": "Reviews of products or services on Internet marketplace websites contain a rich amount of information. Users often wish to survey reviews or review snippets from the perspective of a certain aspect, which has resulted in a large body of work on aspect identification and extraction from such corpora. In this work, we evaluate a newly-proposed neural model for aspect extraction on two practical tasks. The first is to extract canonical sentences of various aspects from reviews, and is judged by human evaluators against alternatives. A kmeans baseline does remarkably well in this setting. The second experiment focuses on the suitability of the recovered aspect distributions to represent users by the reviews they have written. Through a set of review reranking experiments, we find that aspect-based profiles can largely capture notions of user preferences, by showing that divergent users generate markedly different review rankings.",
"title": ""
},
{
"docid": "067ec456d76cce7978b3d2f0c67269ed",
"text": "With the development of deep learning, the performance of hyperspectral image (HSI) classification has been greatly improved in recent years. The shortage of training samples has become a bottleneck for further improvement of performance. In this paper, we propose a novel convolutional neural network framework for the characteristics of hyperspectral image data called HSI-CNN, which can also provides ideas for the processing of one-dimensional data. Firstly, the spectral-spatial feature is extracted from a target pixel and its neighbors. Then, a number of one-dimensional feature maps, obtained by convolution operation on spectral-spatial features, are stacked into a two-dimensional matrix. Finally, the two-dimensional matrix considered as an image is fed into standard CNN. This is why we call it HSI-CNN. In addition, we also implements two depth network classification models, called HSI-CNN+XGBoost and HSI-CapsNet, in order to compare the performance of our framework. Experiments show that the performance of hyperspectral image classification is improved efficiently with HSI-CNN framework. We evaluate the model's performance using four popular HSI datasets, which are the Kennedy Space Center (KSC), Indian Pines (IP), Pavia University scene (PU) and Salinas scene (SA). As far as we concerned, the accuracy of HSI-CNN has kept pace with the state-of-art methods, which is 99.28%, 99.09%, 99.57%, 98.97% separately.",
"title": ""
},
{
"docid": "a60f8086361222cc739e32921a6b1631",
"text": "In this paper, we provide the first systematic and comprehensive analysis of off-state degradation in Drain-Extended PMOS transistors - an enabling input/output (I/O) component in many systems and a prototypical example of devices with correlated degradation (i.e., hot carrier damage leading to gate dielectric failure). We use a wide range of characterization tools (e.g., Charge-pumping and multi-frequency charge pumping to probe damage generation, IDLIN measurement for parametric degradation, current-ratio technique to locate breakdown spot, etc.) along with broad range of computational models (e.g., process, device, Monte Carlo models for hot-carrier profiling, asymmetric percolation for failure statistics, etc.) to carefully and systematically map the spatial and temporal dynamics of correlated trap generation in DePMOS transistors. Our key finding is that, despite the apparent complexity and randomness of the trap-generation process, appropriate scaling shows that the mechanics of trap generation is inherently universal. We use the universality to understand the parametric degradation and TDDB of DePMOS transistors and to perform lifetime projections from stress to operating conditions.",
"title": ""
},
{
"docid": "3939958f235df9dbf7733f946bfa5051",
"text": "This paper presents preliminary findings from our empirical study of the cognition employed by performers in improvisational theatre. Our study has been conducted in a laboratory setting with local improvisers. Participants performed predesigned improv \"games\", which were videotaped and shown to each individual participant for a retrospective protocol collection. The participants were then shown the video again as a group to elicit data on group dynamics, misunderstandings, etc. This paper presents our initial findings that we have built based on our initial analysis of the data and highlights details of interest.",
"title": ""
},
{
"docid": "e13dcab3abbd1abf159ed87ba67dc490",
"text": "A virtual keyboard takes a large portion of precious screen real estate. We have investigated whether an invisible keyboard is a feasible design option, how to support it, and how well it performs. Our study showed users could correctly recall relative key positions even when keys were invisible, although with greater absolute errors and overlaps between neighboring keys. Our research also showed adapting the spatial model in decoding improved the invisible keyboard performance. This method increased the input speed by 11.5% over simply hiding the keyboard and using the default spatial model. Our 3-day multi-session user study showed typing on an invisible keyboard could reach a practical level of performance after only a few sessions of practice: the input speed increased from 31.3 WPM to 37.9 WPM after 20 - 25 minutes practice on each day in 3 days, approaching that of a regular visible keyboard (41.6 WPM). Overall, our investigation shows an invisible keyboard with adapted spatial model is a practical and promising interface option for the mobile text entry systems.",
"title": ""
},
{
"docid": "42cfbb2b2864e57d59a72ec91f4361ff",
"text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started on 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4% (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.",
"title": ""
},
{
"docid": "ffd7afcf6e3b836733b80ed681e2a2b9",
"text": "The emergence of cloud management systems, and the adoption of elastic cloud services enable dynamic adjustment of cloud hosted resources and provisioning. In order to effectively provision for dynamic workloads presented on cloud platforms, an accurate forecast of the load on the cloud resources is required. In this paper, we investigate various forecasting methods presented in recent research, identify and adapt evaluation metrics used in literature and compare forecasting methods on prediction performance. We investigate the performance gain of ensemble models when combining three of the best performing models into one model. We find that our 30th order Auto-regression model and Feed-Forward Neural Network method perform the best when evaluated on Google's Cluster dataset and using the provision specific metrics identified. We also show an improvement in forecasting accuracy when evaluating two ensemble models.",
"title": ""
},
{
"docid": "7f94ebc8ebdde9e337e6dd345c5c529e",
"text": "Forms are a standard way of gathering data into a database. Many applications need to support multiple users with evolving data gathering requirements. It is desirable to automatically link dynamic forms to the back-end database. We have developed the FormMapper system, a fully automatic solution that accepts user-created data entry forms, and maps and integrates them into an existing database in the same domain. The solution comprises of two components: tree extraction and form integration. The tree extraction component leverages a probabilistic process, Hidden Markov Model (HMM), for automatically extracting a semantic tree structure of a form. In the form integration component, we develop a merging procedure that maps and integrates a tree into an existing database and extends the database with desired properties. We conducted experiments evaluating the performance of the system on several large databases designed from a number of complex forms. Our experimental results show that the FormMapper system is promising: It generated databases that are highly similar (87% overlapped) to those generated by the human experts, given the same set of forms.",
"title": ""
}
] |
scidocsrr
|
810c072e56b43b28b603f007cc327d78
|
An augmented reality interface for visualizing and interacting with virtual content
|
[
{
"docid": "1b8d9c6a498821823321572a5055ecc3",
"text": "The objective of stereo camera calibration is to estimate the internal and external parameters of each camera. Using these parameters, the 3-D position of a point in the scene, which is identified and matched in two stereo images, can be determined by the method of triangulation. In this paper, we present a camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions. The proposed calibration procedure consists of two steps. In the first step, the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. We introduce a type of measure that can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and performance of our calibration procedure are tested with both synthetic data and real images taken by teleand wide-angle lenses. The results consistently show significant improvements over less complete camera models.",
"title": ""
}
] |
[
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "cf18b74f3b91facd80583126c1b70a44",
"text": "PURPOSE\nThe objectives of this systematic review are (1) to quantitatively estimate the esthetic outcomes of implants placed in postextraction sites, and (2) to evaluate the influence of simultaneous bone augmentation procedures on these outcomes.\n\n\nMATERIALS AND METHODS\nElectronic and manual searches of the dental literature were performed to collect information on esthetic outcomes based on objective criteria with implants placed after extraction of maxillary anterior and premolar teeth. All levels of evidence were accepted (case series studies required a minimum of 5 cases).\n\n\nRESULTS\nFrom 1,686 titles, 114 full-text articles were evaluated and 50 records included for data extraction. The included studies reported on single-tooth implants adjacent to natural teeth, with no studies on multiple missing teeth identified (6 randomized controlled trials, 6 cohort studies, 5 cross-sectional studies, and 33 case series studies). Considerable heterogeneity in study design was found. A meta-analysis of controlled studies was not possible. The available evidence suggests that esthetic outcomes, determined by esthetic indices (predominantly the pink esthetic score) and positional changes of the peri-implant mucosa, may be achieved for single-tooth implants placed after tooth extraction. Immediate (type 1) implant placement, however, is associated with a greater variability in outcomes and a higher frequency of recession of > 1 mm of the midfacial mucosa (eight studies; range 9% to 41% and median 26% of sites, 1 to 3 years after placement) compared to early (type 2 and type 3) implant placement (2 studies; no sites with recession > 1 mm). In two retrospective studies of immediate (type 1) implant placement with bone graft, the facial bone wall was not detectable on cone beam CT in 36% and 57% of sites. These sites had more recession of the midfacial mucosa compared to sites with detectable facial bone. Two studies of early implant placement (types 2 and 3) combined with simultaneous bone augmentation with GBR (contour augmentation) demonstrated a high frequency (above 90%) of facial bone wall visible on CBCT. Recent studies of immediate (type 1) placement imposed specific selection criteria, including thick tissue biotype and an intact facial socket wall, to reduce esthetic risk. There were no specific selection criteria for early (type 2 and type 3) implant placement.\n\n\nCONCLUSIONS\nAcceptable esthetic outcomes may be achieved with implants placed after extraction of teeth in the maxillary anterior and premolar areas of the dentition. Recession of the midfacial mucosa is a risk with immediate (type 1) placement. Further research is needed to investigate the most suitable biomaterials to reconstruct the facial bone and the relationship between long-term mucosal stability and presence/absence of the facial bone, the thickness of the facial bone, and the position of the facial bone crest.",
"title": ""
},
{
"docid": "68093a9767aea52026a652813c3aa5fd",
"text": "Conventional capacitively coupled neural recording amplifiers often present a large input load capacitance to the neural signal source and hence take up large circuit area. They suffer due to the unavoidable trade-off between the input capacitance and chip area versus the amplifier gain. In this work, this trade-off is relaxed by replacing the single feedback capacitor with a clamped T-capacitor network. With this simple modification, the proposed amplifier can achieve the same mid-band gain with less input capacitance, resulting in a higher input impedance and a smaller silicon area. Prototype neural recording amplifiers based on this proposal were fabricated in 0.35 μm CMOS, and their performance is reported. The amplifiers occupy smaller area and have lower input loading capacitance compared to conventional neural amplifiers. One of the proposed amplifiers occupies merely 0.056 mm2. It achieves 38.1-dB mid-band gain with 1.6 pF input capacitance, and hence has an effective feedback capacitance of 20 fF. Consuming 6 μW, it has an input referred noise of 13.3 μVrms over 8.5 kHz bandwidth and NEF of 7.87. In-vivo recordings from animal experiments are also demonstrated.",
"title": ""
},
{
"docid": "aa7059275a76ee5b7a35354e1f8b4df9",
"text": "We present AMADA, a platform for storing Web data (in particular, XML documents and RDF graphs) based on the Amazon Web Services (AWS) cloud infrastructure. AMADA operates in a Software as a Service (SaaS) approach, allowing users to upload, index, store, and query large volumes of Web data. The demonstration shows (i) the step-by-step procedure for building and exploiting the warehouse (storing, indexing, querying) and (ii) the monitoring tools enabling one to control the expenses (monetary costs) charged by AWS for the operations involved while running AMADA.",
"title": ""
},
{
"docid": "19d8aff7e6c7d20f4aa17d33d3b46eee",
"text": "PURPOSE\nTo evaluate the usefulness of transperineal sonography of the anal sphincter complex for differentiating between an anteriorly displaced anus, which is a normal anatomical variant, and a low-type imperforate anus with perineal fistula, which is a pathological developmental abnormality requiring surgical repair.\n\n\nMATERIALS AND METHODS\nTransperineal sonography was performed with a 13-MHz linear-array transducer on 8 infants (1 day-5.3 months old) who were considered on clinical grounds to have an anteriorly displaced anus and on 9 infants (0-8 months old) with a low-type imperforate anus and perineal fistula confirmed at surgery. The anal sphincter complex was identified and the relationship between the anal canal and the anal sphincter complex was evaluated.\n\n\nRESULTS\nTransperineal sonography was feasible for all children without any specific preparation. An anal canal running within an intact sphincter complex was identified in all infants with an anteriorly displaced anus (n = 8). In 8 of 9 infants with a low-type imperforate anus, a perineal fistula running outside the anal sphincter complex was correctly diagnosed by transperineal sonography. In one infant with a low-type imperforate anus, transperineal sonography revealed a deficient anal sphincter complex.\n\n\nCONCLUSION\nTransperineal sonography appears to be a useful non-invasive imaging technique for assessing congenital anorectal abnormalities in neonates and infants, allowing the surgeon to select infants who would benefit from surgical repair.",
"title": ""
},
{
"docid": "18959618a153812f6c4f38ce2803084a",
"text": "This decade sees a growing number of applications of Unmanned Aerial Vehicles (UAVs) or drones. UAVs are now being experimented for commercial applications in public areas as well as used in private environments such as in farming. As such, the development of efficient communication protocols for UAVs is of much interest. This paper compares and contrasts recent communication protocols of UAVs with that of Vehicular Ad Hoc Networks (VANETs) using Wireless Access in Vehicular Environments (WAVE) protocol stack as the reference model. The paper also identifies the importance of developing light-weight communication protocols for certain applications of UAVs as they can be both of low processing power and limited battery energy.",
"title": ""
},
{
"docid": "5bd61380b9b05b3e89d776c6cbeb0336",
"text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "00086e7ea6d034136eabdd79fc37466d",
"text": "This paper represents how to de-blurred image with Wiener filter with information of the Point Spread Function (PSF) corrupted blurred image with different values and then corrupted by additive noise. Image is restored using Wiener deconvolution (it works in the frequency domain, attempting to minimize the impact of deconvoluted noise at frequencies which have a poor signal-to-noise ratio). Noise-to-signal ratio is used to control of noise. For better restoration of the blurred and noisy images, there is use of full autocorrelations functions (ACF). ACF is recovered through fast Fourier transfer shifting.",
"title": ""
},
{
"docid": "adb46bea91457f027c6040cd1d706a76",
"text": "Several new algorithms for visual correspondence based on graph cuts [6, 13, 16] have recently been developed. While these methods give very strong results in practice, they do not handle occlusions properly. Specifically, they treat the two input images asymmetrically, and they do not ensure that a pixel corresponds to at most one pixel in the other image. In this paper, we present two new methods which properly address occlusions, while preserving the advantages of graph cut algorithms. We give experimental results for stereo as well as motion, which demonstrate that our methods perform well both at detecting occlusions and computing disparities.",
"title": ""
},
{
"docid": "4de80563d7c651b02764499d4b7e679f",
"text": "Spam is any unwanted electronic message or material in any form posted too many people. As the world is growing as global world, social networking sites play an important role in making world global providing people from different parts of the world a platform to meet and express their views. Among different social networking sites Facebook become the leading one. With increase in usage different users start abusive use of Facebook by posting or creating ways to post spam. This paper highlights the potential spam types nowadays Facebook users’ faces. This paper also provide the reason how user become victim to spam attack. A methodology is proposed in the end discusses how to handle different types of spam. Keywords—Artificial neural networks, Facebook spam, social networking sites, spam filter.",
"title": ""
},
{
"docid": "4590fe5f3e5edff3548d6f78a450402c",
"text": "State of the art methods in astronomical image reconstruction rely on the resolution of a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic or at least superlinear complexity w.r.t. the number of pixels in the image. We investigate in this work the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive tasks is the training step, but the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state of the art methods in addition to being interpretable.",
"title": ""
},
{
"docid": "b8be26e7ddf9dbf4cd9e6fd5be5868f3",
"text": "Software logging is a conventional programming practice. As its efficacy is often important for users and developers to understand what have happened in production run, yet software logging is often done in an arbitrarily manner. So far, there have been little study for understanding logging practices in real world software. This paper makes the first attempt (to the best of our knowledge) to provide quantitative characteristic study of the current log messages within four pieces of large open-source software. First, we quantitatively show that software logging is pervasive. By examining developers’ own modifications to logging code in revision history, we find that they often do not make the log messages right in their first attempts, and thus need to spend significant amount of efforts to modify the log messages as after-thoughts. Our study further provides several interesting findings on where developers spend most of their efforts in modifying the log messages, which can give insights for programmers, tool developers, and language and compiler designers to improve the current logging practice. To demonstrate the benefit of our study, we built a simple checker based on one of our findings and effectively detected 138 new problematic logging code from studied software (24 of them are already confirmed and fixed by developers).",
"title": ""
},
{
"docid": "8a34e31b058c01501c2257fc61b79833",
"text": "This paper proposes a new robust adaptive beamformer applicable to microphone arrays. The proposed beamformer is a generalized sidelobe canceller (GSC) with a variable blocking matrix using coefficient-constrained adaptive filters (CCAFs). The CCAFs, whose common input signal is the output of a fixed beamformer, minimize leakage of the target signal into the interference path of the GSC. Each coefficient of the CCAFs is constrained to avoid mistracking. In the multipleinput canceller, leaky adaptive filters are used to decrease undesirable target-signal cancellation. The proposed beamformer can allow large look-direction error with almost no degradation in interference-reduction performance and can be implemented with a small number of microphones. The maximum allowable look-direction error can be specified by the user. Simulation results show that the proposed beamformer, when designed to allow about 20◦ of look-direction error, can suppress interference by more than 17 dB. key words: beamforming, microphone array, adaptive signal processing, noise reduction",
"title": ""
},
{
"docid": "1e4ea38a187881d304ea417f98a608d1",
"text": "Breast cancer represents the second leading cause of cancer deaths in women today and it is the most common type of cancer in women. This paper presents some experiments for tumour detection in digital mammography. We investigate the use of different data mining techniques, neural networks and association rule mining, for anomaly detection and classification. The results show that the two approaches performed well, obtaining a classification accuracy reaching over 70% percent for both techniques. Moreover, the experiments we conducted demonstrate the use and effectiveness of association rule mining in image categorization.",
"title": ""
},
{
"docid": "60511dbd1dbb4c01881dac736dd7f988",
"text": "The current study reconceptualized self-construal as a social cognitive indicator of self-observation that individuals employ for developing and maintaining social relationship with others. From the social cognitive perspective, this study investigated how consumers’ self-construal can affect consumers’ electronic word of mouth (eWOM) behavior through two cognitive factors (online community engagement self-efficacy and social outcome expectations) in the context of a social networking site. This study conducted an online experiment that directed 160 participants to visit a newly created online community. The results demonstrated that consumers’ relational view became salient when the consumers’ self-construal was primed to be interdependent rather than independent. Further, the results showed that such interdependent self-construal positively influenced consumers’ eWOM behavioral intentions through their community engagement self-efficacy and their social outcome expectations. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cbc6bd586889561cc38696f758ad97d2",
"text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.",
"title": ""
},
{
"docid": "6e9432d2669ae81a350814df94f9edc3",
"text": "In parallel with the meteoric rise of mobile software, we are witnessing an alarming escalation in the number and sophistication of the security threats targeted at mobile platforms, particularly Android, as the dominant platform. While existing research has made significant progress towards detection and mitigation of Android security, gaps and challenges remain. This paper contributes a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area. We have carefully followed the systematic literature review process, and analyzed the results of more than 100 research papers, resulting in the most comprehensive and elaborate investigation of the literature in this area of research. The systematic analysis of the research literature has revealed patterns, trends, and gaps in the existing literature, and underlined key challenges and opportunities that will shape the focus of future research efforts.",
"title": ""
},
{
"docid": "7b36abede1967f89b79975883074a34d",
"text": "In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding-based kernel achieves the best performance. Furthermore, we present episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for VIN and GVIN. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and realworld street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image).",
"title": ""
},
{
"docid": "6182626269d38c81fa63eb2cab91caca",
"text": "Environmental management, a term encompassing environmental planning, protection, monitoring, assessment, research, education, conservation and sustainable use of resources, is now accepted as a major guiding factor for sustainable development at the regional and national level. It is now being increasingly recognized that environmental factors and ecological imperatives must be in built to the total planning process if the long-term goal of making industrial development sustainable is to be achieved. Here we will try to define and discuss the role of Environmental Analysis in the strategic management process of organization. The present complex world require as far as is feasible, it consider impact of important factors related to organizations in strategic planning. The strategic planning of business includes all functional subdivisions and forwards them in a united direction. One of these subsystems is human resource management. Strategic human resource management comes after the strategic planning, and followed by strategic human resource planning as a major activity in all the industries. In strategic planning, it can use different analytical methods and techniques that one of them is PEST analysis. This paper introduces how to apply it in a new manner.",
"title": ""
}
] |
scidocsrr
|
291e3159bb67c2ddb40253c72748be69
|
Multimedia Cloud Computing
|
[
{
"docid": "f085832faf1a2921eedd3d00e8e592db",
"text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"title": ""
}
] |
[
{
"docid": "c77b2b45f189b6246c9f2e2ed527772f",
"text": "PaaS vendors face challenges in efficiently providing services with the growth of their offerings. In this paper, we explore how PaaS vendors are using containers as a means of hosting Apps. The paper starts with a discussion of PaaS Use case and the current adoption of Container based PaaS architectures with the existing vendors. We explore various container implementations - Linux Containers, Docker, Warden Container, lmctfy and OpenVZ. We look at how each of this implementation handle Process, FileSystem and Namespace isolation. We look at some of the unique features of each container and how some of them reuse base Linux Container implementation or differ from it. We also explore how IaaSlayer itself has started providing support for container lifecycle management along with Virtual Machines. In the end, we look at factors affecting container implementation choices and some of the features missing from the existing implementations for the next generation PaaS.",
"title": ""
},
{
"docid": "6b17bc7d1c6e19c4a4a5395f0b6b9ec9",
"text": "The development of accurate models for cyber-physical systems (CPSs) is hampered by the complexity of these systems, fundamental differences in the operation of cyber and physical components, and significant interdependencies among these components. Agent-based modeling shows promise in overcoming these challenges, due to the flexibility of software agents as autonomous and intelligent decision-making components. Semantic agent systems are even more capable, as the structure they provide facilitates the extraction of meaningful content from the data provided to the software agents. In this paper, we present a multi-agent model for a CPS, where the semantic capabilities are underpinned by sensor networks that provide information about the physical operation to the cyber infrastructure. This model is used to represent the static structure and dynamic behavior of an intelligent water distribution network as a CPS case study.",
"title": ""
},
{
"docid": "e982aa23c644bad4870bafaf7344d15a",
"text": "In this work we introduce a structured prediction model that endows the Deep Gaussian Conditional Random Field (G-CRF) with a densely connected graph structure. We keep memory and computational complexity under control by expressing the pairwise interactions as inner products of low-dimensional, learnable embeddings. The G-CRF system matrix is therefore low-rank, allowing us to solve the resulting system in a few milliseconds on the GPU by using conjugate gradient. As in G-CRF, inference is exact, the unary and pairwise terms are jointly trained end-to-end by using analytic expressions for the gradients, while we also develop even faster, Potts-type variants of our embeddings. We show that the learned embeddings capture pixel-to-pixel affinities in a task-specific manner, while our approach achieves state of the art results on three challenging benchmarks, namely semantic segmentation, human part segmentation, and saliency estimation. Our implementation is fully GPU based, built on top of the Caffe library, and is available at https://github.com/siddharthachandra/gcrf-v2.0.",
"title": ""
},
{
"docid": "3169b7b287e065f89f3a737a2ce98c28",
"text": "In digital forensics, the detection of the presence of tampered images is of significant importance. The problem with the existing literature is that majority of them identify certain features in images tampered by a specific tampering method (such as copy-move, splicing, etc). This means that the method does not work reliably across various tampering methods. In addition, in terms of tampered region localization, most of the work targets only JPEG images due to the exploitation of double compression artifacts left during the re-compression of the manipulated image. However, in reality, digital forensics tools should not be specific to any image format and should also be able to localize the region of the image that was modified. In this paper, we propose a two stage deep learning approach to learn features in order to detect tampered images in different image formats. For the first stage, we utilize a Stacked Autoencoder model to learn the complex feature for each individual patch. For the second stage, we integrate the contextual information of each patch so that the detection can be conducted more accurately. In our experiments, we were able to obtain an overall tampered region localization accuracy of 91.09% over both JPEG and TIFF images from CASIA dataset, with a fall-out of 4.31% and a precision of 57.67% respectively. The accuracy over the JPEG tampered images is 87.51%, which outperforms the 40.84% and 79.72% obtained from two state of the art tampering detection approaches.",
"title": ""
},
{
"docid": "61aeadb580e72e8c09840f26f164e19d",
"text": "Unmanned Air Systems (UAS) show great promise for a range of civilian applications, especially „dull, dirty or dangerous‟ missions such as air-sea rescue, coastal and border surveillance, fisheries protection and disaster relief. As the demand for autonomy increases, the importance of correctly identifying and responding to faults becomes more apparent, as fully autonomous systems must base their decisions solely upon the sensors readings they receive – as there is no human on board. A UAS must be capable of performing all the functions that would be expected from a human pilot, including reasoning about faults and making decisions about how to best mitigate their consequences, given the larger context of the overall mission. As these autonomous techniques are developed their benefits can also be realised in non-autonomous systems, as realtime aids to human operators or crew. This paper proposes a novel approach to PHM that combines advanced Functional Failure Mode Analysis with a reasoning system, to provide effective PHM for autonomous systems and improved diagnosis capability for manned aircraft. *",
"title": ""
},
{
"docid": "c441a85ace8ac9f75fc106e5b378aff1",
"text": "Reconstructing a high-resolution 3D model of an object is a challenging task in computer vision. Designing scalable and light-weight architectures is crucial while addressing this problem. Existing point-cloud based reconstruction approaches directly predict the entire point cloud in a single stage. Although this technique can handle low-resolution point clouds, it is not a viable solution for generating dense, high-resolution outputs. In this work, we introduce DensePCR, a deep pyramidal network for point cloud reconstruction that hierarchically predicts point clouds of increasing resolution. Towards this end, we propose an architecture that first predicts a low-resolution point cloud, and then hierarchically increases the resolution by aggregating local and global point features to deform a grid. Our method generates point clouds that are accurate, uniform and dense. Through extensive quantitative and qualitative evaluation on synthetic and real datasets, we demonstrate that DensePCR outperforms the existing state-of-the-art point cloud reconstruction works, while also providing a light-weight and scalable architecture for predicting high-resolution outputs.",
"title": ""
},
{
"docid": "ba58efc16a48e8a2203189781d58cb03",
"text": "Introduction The typical size of large networks such as social network services, mobile phone networks or the web now counts in millions when not billions of nodes and these scales demand new methods to retrieve comprehensive information from their structure. A promising approach consists in decomposing the networks into communities of strongly connected nodes, with the nodes belonging to different communities only sparsely connected. Finding exact optimal partitions in networks is known to be computationally intractable, mainly due to the explosion of the number of possible partitions as the number of nodes increases. It is therefore of high interest to propose algorithms to find reasonably “good” solutions of the problem in a reasonably “fast” way. One of the fastest algorithms consists in optimizing the modularity of the partition in a greedy way (Clauset et al, 2004), a method that, even improved, does not allow to analyze more than a few millions nodes (Wakita et al, 2007).",
"title": ""
},
{
"docid": "843114fa31397e6154c63561e30add48",
"text": "Many animals engage in many behaviors that reduce their exposure to pathogens. Ants line their nests with resins that inhibit the growth of fungi and bacteria (Chapuisat, Oppliger, Magliano, & Christe, 2008). Mice avoid mating with other mice that are infected with parasitic protozoa (Kavaliers & Colwell, 1995). Animals of many kinds—from physiologically primitive nematode worms to neurologically sophisticated chimpanzees—strategically avoid physical contact with specific things (including their own conspecifics) that, on the basis of superficial sensory cues, appear to pose some sort of infection risk (Goodall, 1986; Kiesecker, Skelly, Beard, & Preisser, 1999; Schulenburg & Müller, 2004).",
"title": ""
},
{
"docid": "aee91ee5d4cbf51d9ce1344be4e5448c",
"text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.",
"title": ""
},
{
"docid": "e2a863f5407ce843af196c105adfb2fe",
"text": "We study the Student-Project Allocation problem (SPA), a generalisation of the classical Hospitals / Residents problem (HR). An instance of SPA involves a set of students, projects and lecturers. Each project is offered by a unique lecturer, and both projects and lecturers have capacity constraints. Students have preferences over projects, whilst lecturers have preferences over students. We present two optimal linear-time algorithms for allocating students to projects, subject to the preference and capacity constraints. In particular, each algorithm finds a stable matching of students to projects. Here, the concept of stability generalises the stability definition in the HR context. The stable matching produced by the first algorithm is simultaneously best-possible for all students, whilst the one produced by the second algorithm is simultaneously best-possible for all lecturers. We also prove some structural results concerning the set of stable matchings in a given instance of SPA. The SPA problem model that we consider is very general and has applications to a range of different contexts besides student-project allocation.",
"title": ""
},
{
"docid": "9d7df314d1f8c535bdee7497fd61eec6",
"text": "For prosthetic hand manipulation, the surface Electromyography(sEMG) has been widely applied. Researchers usually focus on the recognition of hand grasps or gestures, but ignore the hand force, which is equally important for robotic hand control. Therefore, this paper concentrates on the methods of finger forces estimation based on multichannel sEMG signal. A custom-made sEMG sleeve system omitting the stage of muscle positioning is utilised to capture the sEMG signal on the forearm. A mathematic model for muscle activation extraction is established to describe the relationship between finger pinch forces and sEMG signal, where the genetic algorithm is employed to optimise the coefficients. The results of experiments in this paper shows three main contributions: 1) There is a systematical relationship between muscle activations and the pinch finger forces. 2) To estimate the finger force, muscle precise positioning for electrodes placement is not inevitable. 3) In a multi-channel EMG system, selecting specific combinations of several channels can improve the estimation accuracy for specific gestures.",
"title": ""
},
{
"docid": "ca0d5a3f9571f288d244aee0b2c2f801",
"text": "This paper proposes, focusing on random forests, the increa singly used statistical method for classification and regre ssion problems introduced by Leo Breiman in 2001, to investigate two classi cal issues of variable selection. The first one is to find impor tant variables for interpretation and the second one is more rest rictive and try to design a good prediction model. The main co tribution is twofold: to provide some insights about the behavior of th e variable importance index based on random forests and to pr opose a strategy involving a ranking of explanatory variables usi ng the random forests score of importance and a stepwise asce nding variable introduction strategy.",
"title": ""
},
{
"docid": "73581b5a936a75f936112747bd05003e",
"text": "We consider the problem of creating secure and resourceefficient blockchain networks i.e., enable a group of mutually distrusting participants to efficiently share state and then agree on an append-only history of valid operations on that shared state. This paper proposes a new approach to build such blockchain networks. Our key observation is that an append-only, tamper-resistant ledger (when used as a communication medium for messages sent by participants in a blockchain network) offers a powerful primitive to build a simple, flexible, and efficient consensus protocol, which in turn serves as a solid foundation for building secure and resource-efficient blockchain networks. A key ingredient in our approach is the abstraction of a blockchain service provider (BSP), which oversees creating and updating an append-only, tamper-resistant ledger, and a new distributed protocol called Caesar consensus, which leverages the BSP’s interface to enable members of a blockchain network to reach consensus on the BSP’s ledger—even when the BSP or a threshold number of members misbehave arbitrarily. By design, the BSP is untrusted, so it can run on any untrusted infrastructure and can be optimized for better performance without affecting end-to-end security. We implement our proposal in a system called VOLT. Our experimental evaluation suggests that VOLT incurs low resource costs and provides better performance compared to alternate approaches.",
"title": ""
},
{
"docid": "7d0e59bee3b2a430ba0436f5df5621c0",
"text": "The vertical dimension of interpersonal relations (relating to dominance, power, and status) was examined in association with nonverbal behaviors that included facial behavior, gaze, interpersonal distance, body movement, touch, vocal behaviors, posed encoding skill, and others. Results were separately summarized for people's beliefs (perceptions) about the relation of verticality to nonverbal behavior and for actual relations between verticality and nonverbal behavior. Beliefs/perceptions were stronger and much more prevalent than were actual verticality effects. Perceived and actual relations were positively correlated across behaviors. Heterogeneity was great, suggesting that verticality is not a psychologically uniform construct in regard to nonverbal behavior. Finally, comparison of the verticality effects to those that have been documented for gender in relation to nonverbal behavior revealed only a limited degree of parallelism.",
"title": ""
},
{
"docid": "3cfcbf940acc364bb07f01c7e46a0cbe",
"text": "Intraoral pigmentation is quite common and has numerous etiologies, ranging from exogenous to physiological to neoplastic. Many pigmented lesions of the oral cavity are associated with melanin pigment. The differential diagnosis of mucosal pigmented lesions includes hematomas, varices, and petechiae which may appear to be pigmented. Unlike cutaneous melanomas, oral melanomas are diagnosed late and have a poor prognosis regardless of depth of invasion. As such, the clinical presentation and treatment of intraoral melanoma will be discussed. Developing a differential diagnosis is imperative for a clinician faced with these lesions in order to appropriately treat the patient. This article will focus on the most common oral melanocytic lesions, along with mimics.",
"title": ""
},
{
"docid": "054c2e8fa9421c77939091e5adfc07e5",
"text": "Visualization is a powerful paradigm for exploratory data analysis. Visualizing large graphs, however, often results in excessive edges crossings and overlapping nodes. We propose a new scalable approach called FACETS that helps users adaptively explore large million-node graphs from a local perspective, guiding them to focus on nodes and neighborhoods that are most subjectively interesting to users. We contribute novel ideas to measure this interestingness in terms of how surprising a neighborhood is given the background distribution, as well as how well it matches what the user has chosen to explore. FACETS uses Jensen-Shannon divergence over information-theoretically optimized histograms to calculate the subjective user interest and surprise scores. Participants in a user study found FACETS easy to use, easy to learn, and exciting to use. Empirical runtime analyses demonstrated FACETS’s practical scalability on large real-world graphs with up to 5 million edges, returning results in fewer than 1.5 seconds.",
"title": ""
},
{
"docid": "a52d2a2c8fdff0bef64edc1a97b89c63",
"text": "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.",
"title": ""
},
{
"docid": "56ebcdaac49df0120f947c9f84e48efa",
"text": "While deep learning has achieved great success in computer vision and many other fields, currently it does not work very well on patient genomic data with the “big p, smallN” problem (i.e., a relatively small number of samples with high-dimensional features). In order to make deep learning work with a small amount of training data, we have to design new models that facilitate few-shot learning. Here we present the Affinity Network Model (AffinityNet), a data efficient deep learning model that can learn from a limited number of training examples and generalize well. The backbone of the AffinityNet model consists of stacked k-Nearest-Neighbor (kNN) attention pooling layers. The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM), and can be applied to not only graphs but also any set of objects regardless of whether a graph is given or not. As a new deep learning module, kNN attention pooling layers can be plugged into any neural network model just like convolutional layers. As a simple special case of kNN attention pooling layer, feature attention layer can directly select important features that are useful for classification tasks. Experiments on both synthetic data and cancer genomic data from TCGA projects show that our AffinityNet model has better generalization power than conventional neural network models with little training data. We have implemented our method using PyTorch framework (https://pytorch.org).The code is freely available at https://github.com/BeautyOfWeb/AffinityNet.",
"title": ""
},
{
"docid": "ef142067a29f8662e36d68ee37c07bce",
"text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).",
"title": ""
}
] |
scidocsrr
|
5f9ad4acf3ded44acc1c3ea3b6cf3c21
|
Mining Software Engineering Data from GitHub
|
[
{
"docid": "9c096af9f77a93a54b452e90cb52a166",
"text": "GitHub, one of the most popular social coding platforms, is the platform of reference when mining Open Source repositories to learn from past experiences. In the last years, a number of research papers have been published reporting findings based on data mined from GitHub. As the community continues to deepen in its understanding of software engineering thanks to the analysis performed on this platform, we believe it is worthwhile to reflect how research papers have addressed the task of mining GitHub repositories over the last years. In this regard, we present a meta-analysis of 93 research papers which addresses three main dimensions of those papers: i) the empirical methods employed, ii) the datasets they used and iii) the limitations reported. Results of our meta-analysis show some concerns regarding the dataset collection process and size, the low level of replicability, poor sampling techniques, lack of longitudinal studies and scarce variety of methodologies.",
"title": ""
},
{
"docid": "bac117da7b07fff75cf039165fc4e57e",
"text": "The advent of distributed version control systems has led to the development of a new paradigm for distributed software development; instead of pushing changes to a central repository, developers pull them from other repositories and merge them locally. Various code hosting sites, notably Github, have tapped on the opportunity to facilitate pull-based development by offering workflow support tools, such as code reviewing systems and integrated issue trackers. In this work, we explore how pull-based software development works, first on the GHTorrent corpus and then on a carefully selected sample of 291 projects. We find that the pull request model offers fast turnaround, increased opportunities for community engagement and decreased time to incorporate contributions. We show that a relatively small number of factors affect both the decision to merge a pull request and the time to process it. We also examine the reasons for pull request rejection and find that technical ones are only a small minority.",
"title": ""
},
{
"docid": "0153774b49121d8735cc3d33df69fc00",
"text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.",
"title": ""
}
] |
[
{
"docid": "b5f9535fb63cae3d115e1e5bded4795c",
"text": "This study uses a hostage negotiation setting to demonstrate how a team of strategic police officers can utilize specific coping strategies to minimize uncertainty at different stages of their decision-making in order to foster resilient decision-making to effectively manage a high-risk critical incident. The presented model extends the existing research on coping with uncertainty by (1) applying the RAWFS heuristic (Lipshitz and Strauss in Organ Behav Human Decis Process 69:149–163, 1997) of individual decision-making under uncertainty to a team critical incident decision-making domain; (2) testing the use of various coping strategies during “in situ” team decision-making by using a live simulated hostage negotiation exercise; and (3) including an additional coping strategy (“reflection-in-action”; Schön in The reflective practitioner: how professionals think in action. Temple Smith, London, 1983) that aids naturalistic team decision-making. The data for this study were derived from a videoed strategic command meeting held within a simulated live hostage training event; these video data were coded along three themes: (1) decision phase; (2) uncertainty management strategy; and (3) decision implemented or omitted. Results illustrate that, when assessing dynamic and high-risk situations, teams of police officers cope with uncertainty by relying on “reduction” strategies to seek additional information and iteratively update these assessments using “reflection-in-action” (Schön 1983) based on previous experience. They subsequently progress to a plan formulation phase and use “assumption-based reasoning” techniques in order to mentally simulate their intended courses of action (Klein et al. 2007), and identify a preferred formulated strategy through “weighing the pros and cons” of each option. In the unlikely event that uncertainty persists to the plan execution phase, it is managed by “reduction” in the form of relying on plans and standard operating procedures or by “forestalling” and intentionally deferring the decision while contingency planning for worst-case scenarios.",
"title": ""
},
{
"docid": "b899a5effd239f1548128786d5ae3a8f",
"text": "As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator. Edward Balaban et.al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
"title": ""
},
{
"docid": "c2c2ddb9a6e42edcc1c035636ec1c739",
"text": "As the interest in DevOps continues to grow, there is an increasing need for software organizations to understand how to adopt it successfully. This study has as objective to clarify the concept and provide insight into existing challenges of adopting DevOps. First, the existing literature is reviewed. A definition of DevOps is then formed based on the literature by breaking down the concept into its defining characteristics. We interview 13 subjects in a software company adopting DevOps and, finally, we present 11 impediments for the company’s DevOps adoption that were identified based on the interviews.",
"title": ""
},
{
"docid": "ed0d1e110347313285a6b478ff8875e3",
"text": "Data mining is an area of computer science with a huge prospective, which is the process of discovering or extracting information from large database or datasets. There are many different areas under Data Mining and one of them is Classification or the supervised learning. Classification also can be implemented through a number of different approaches or algorithms. We have conducted the comparison between three algorithms with help of WEKA (The Waikato Environment for Knowledge Analysis), which is an open source software. It contains different type's data mining algorithms. This paper explains discussion of Decision tree, Bayesian Network and K-Nearest Neighbor algorithms. Here, for comparing the result, we have used as parameters the correctly classified instances, incorrectly classified instances, time taken, kappa statistic, relative absolute error, and root relative squared error.",
"title": ""
},
{
"docid": "511342f43f7b5f546e72e8651ae4e313",
"text": "With the introduction of the Microsoft Kinect for Windows v2 (Kinect v2), an exciting new sensor is available to robotics and computer vision researchers. Similar to the original Kinect, the sensor is capable of acquiring accurate depth images at high rates. This is useful for robot navigation as dense and robust maps of the environment can be created. Opposed to the original Kinect working with the structured light technology, the Kinect v2 is based on the time-of-flight measurement principle and might also be used outdoors in sunlight. In this paper, we evaluate the application of the Kinect v2 depth sensor for mobile robot navigation. The results of calibrating the intrinsic camera parameters are presented and the minimal range of the depth sensor is examined. We analyze the data quality of the measurements for indoors and outdoors in overcast and direct sunlight situations. To this end, we introduce empirically derived noise models for the Kinect v2 sensor in both axial and lateral directions. The noise models take the measurement distance, the angle of the observed surface, and the sunlight incidence angle into account. These models can be used in post-processing to filter the Kinect v2 depth images for a variety of applications.",
"title": ""
},
{
"docid": "8d527a82ca678bb205c40749641efdd5",
"text": "Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.",
"title": ""
},
{
"docid": "a38105bda456a970b75422df194ecd68",
"text": "Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s(2) peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.",
"title": ""
},
{
"docid": "61c2bbfbbc8d9d87e21e8a26f036471d",
"text": "The development of the Proximity based Mobile Social Networking (PMSN) has been growing exponentially with the adoption of smartphones and introduction of Wi-Fi hotspots in public and remote areas. Users present in the vicinity can interact with each other using the embedded technologies in their mobile devices such as GPS, Wi-Fi and Bluetooth. Due to its growing momentum, this new social networking has also aroused the interest of business people and advertisers. However, due to a lack of security in these networks, several privacy concerns were reported. Users are more reluctant to share their locations and to address this issue, some initial solutions to preserve location privacy were implemented. The aim of this paper is to present a clear categorization of the different privacy threats in PMSN. Moreover, the location privacy enforcement policies and techniques used to ensure privacy are outlined and some solutions employed in existent systems are presented and discussed. To the best of our knowledge, this is the first study done outlining several categories of PMSN privacy challenges and their solutions in this new type of social networking services. Finally, some privacy research challenges and future perspectives are proposed.",
"title": ""
},
{
"docid": "8663f7e820a9292fd82942819a0639d1",
"text": "In this paper, we present an appearance learning approach which is used to detect and track surgical robotic tools in laparoscopic sequences. By training a robust visual feature descriptor on low-level landmark features, we build a framework for fusing robot kinematics and 3D visual observations to track surgical tools over long periods of time across various types of environment. We demonstrate 3D tracking on multiple types of tool (with different overall appearances) as well as multiple tools simultaneously. We present experimental results using the da Vinci® surgical robot using a combination of both ex-vivo and in-vivo environments.",
"title": ""
},
{
"docid": "015da67991b6480433f889bd597abdb4",
"text": "Nowadays the requirement for developing a wheel chair control which is useful for the physically disabled person with Tetraplegia. This system involves the control of the wheel chair with the eye moment of the affected person. Statistics suggest that there are 230,000 cases of Tetraplegia in India. Our system here is to develop a wheelchair which make the lives of these people easier and instigate confidence to live in them. We know that a person who is affected by Tetraplegia can move their eyes alone to a certain extent which paves the idea for the development of our system. Here we have proposed the method for a device where a patient placed on the wheel chair looking in a straight line at the camera which is permanently fixed in the optics, is capable to move in a track by gazing in that way. When we change the direction, the camera signals are given using the mat lab script to the microcontroller. Depends on the path of the eye, the microcontroller controls the wheel chair in all direction and stops the movement. If there is any obstacle to be found before the wheel chair the sensor mind that and it stop and move in right direction immediately. The benefit of this system is too easily travel anywhere in any direction which is handled by physically disabled person with Tetraplegia.",
"title": ""
},
{
"docid": "3f5eed1f718e568dc3ba9abbcd6bfedd",
"text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of `context-aware' emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"title": ""
},
{
"docid": "369af16d8d6bcaaa22b1ef727768e5e3",
"text": "We catalogue available software solutions for non-rigid image registration to support scientists in selecting suitable tools for specific medical registration purposes. Registration tools were identified using non-systematic search in Pubmed, Web of Science, IEEE Xplore® Digital Library, Google Scholar, and through references in identified sources (n = 22). Exclusions are due to unavailability or inappropriateness. The remaining (n = 18) tools were classified by (i) access and technology, (ii) interfaces and application, (iii) living community, (iv) supported file formats, and (v) types of registration methodologies emphasizing the similarity measures implemented. Out of the 18 tools, (i) 12 are open source, 8 are released under a permissive free license, which imposes the least restrictions on the use and further development of the tool, 8 provide graphical processing unit (GPU) support; (ii) 7 are built on software platforms, 5 were developed for brain image registration; (iii) 6 are under active development but only 3 have had their last update in 2015 or 2016; (iv) 16 support the Analyze format, while 7 file formats can be read with only one of the tools; and (v) 6 provide multiple registration methods and 6 provide landmark-based registration methods. Based on open source, licensing, GPU support, active community, several file formats, algorithms, and similarity measures, the tools Elastics and Plastimatch are chosen for the platform ITK and without platform requirements, respectively. Researchers in medical image analysis already have a large choice of registration tools freely available. However, the most recently published algorithms may not be included in the tools, yet.",
"title": ""
},
{
"docid": "03826954a304a4d6bdb2c1f55bbe8001",
"text": "This paper gives an overview of the channel access methods of three wireless technologies that are likely to be used in the environment of vehicle networks: IEEE 802.15.4, IEEE 802.11 and Bluetooth. Researching the coexistence of IEEE 802.15.4 with IEEE 802.11 and Bluetooth, results of experiments conducted in a radio frequency anechoic chamber are presented. The power densities of the technologies on a single IEEE 802.15.4 channel are compared. It is shown that the pure existence of an IEEE 802.11 access point leads to collisions due to different timing scales. Furthermore, the packet drop rate caused by Bluetooth is analyzed and an estimation formula for it is given.",
"title": ""
},
{
"docid": "47c88bb234a6e21e8037a67e6dd2444f",
"text": "Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.",
"title": ""
},
{
"docid": "582db047ab7ad95931c8da401546697e",
"text": "OBJECTIVE\nTo develop and evaluate a liquid phase immunoassay for accurate determination of allergen-specific IgE (sIgE) as a useful tool in the diagnosis of allergy patients.\n\n\nDESIGN AND METHODS\nA fully automated, quantitative sIgE assay was developed for the ADVIA Centaur technology platform using a unique calibration method based on a recombinant reference allergen. Compared to most other IgE-assays, the assay employs a reverse sandwich architecture using monoclonal mouse anti-human IgE antibody covalently bound to paramagnetic particles in the solid phase and capturing the sample IgE. Bound sIgE reacts with liquid biotin-labeled allergen, which is detected as chemiluminescence using acridiniumester-labeled streptavidin.\n\n\nRESULTS\nThe ADVIA Centaur sIgE assay (Centaur assay) has exclusive reactivity to human IgE and performs with excellent linearity in the assay range 0.35-100 kU/L and high precision (imprecision within-run <2.6%, between-run <4.9%, and total imprecision <7.1%). The analytical sensitivity is <0.10 kU/L. Using Pharmacia CAP system FEIA (CAP) as a comparative method, positive/negative concordance was 94% at 0.35 kU/L cut-off, and the Centaur assay has a sensitivity of 90% and a specificity of 98%. Validation of the assay in a general population sample (The Copenhagen allergy study) revealed that sIgE was highly associated with a clinical diagnosis of inhalation allergy.\n\n\nCONCLUSIONS\nThe Centaur assay is an allergen-specific assay for measurement of IgE without interference from other types of immunoglobulins or nonspecific IgE. The assay performs with a linear reaction, high assay range, and good reproducibility. The assay correlates well with the CAP system and is in agreement with clinical diagnosis.",
"title": ""
},
{
"docid": "24416c57bcf10b0474758ed579161868",
"text": "In many image-related tasks, learning expressive and discriminative representations of images is essential, and deep learning has been studied for automating the learning of such representations. Some user-centric tasks, such as image recommendations, call for effective representations of not only images but also preferences and intents of users over images. Such representations are termed hybrid and addressed via a deep learning approach in this paper. We design a dual-net deep network, in which the two sub-networks map input images and preferences of users into a same latent semantic space, and then the distances between images and users in the latent space are calculated to make decisions. We further propose a comparative deep learning (CDL) method to train the deep network, using a pair of images compared against one user to learn the pattern of their relative distances. The CDL embraces much more training data than naive deep learning, and thus achieves superior performance than the latter, with no cost of increasing network complexity. Experimental results with real-world data sets for image recommendations have shown the proposed dual-net network and CDL greatly outperform other state-of-the-art image recommendation solutions.",
"title": ""
},
{
"docid": "54c6e02234ce1c0f188dcd0d5ee4f04c",
"text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.",
"title": ""
},
{
"docid": "a6b2dd2f7aa481f20d314b060985b079",
"text": "Bayesian Network has an advantage in dealing with uncertainty. But It is difficult to construct a scientific and rational Bayesian Network model in practice application. In order to solve this problem, a novel method for constructing Bayesian Network by integrating Failure Mode and Effect Analysis (FMEA) with Fault Tree Analysis (FTA) was proposed. Firstly, the structure matrix representations of FMEA, FTA and Bayesian Network were shown and a structure matrix integration algorithm was explained. Then, an approach for constructing Bayesian Network by obtaining information on node, structure and parameter from FMEA and FTA based on structure matrix was put forward. Finally, in order to verify the feasibility of the method, an illustrative example was given. This method can simplify the modeling process and improve the modeling efficiency for constructing Bayesian Network and promote the application of Bayesian Network in the system reliability and safety analysis.",
"title": ""
},
{
"docid": "5a62c276e7cce7c7a10109f3c3b1e401",
"text": "A miniature coplanar antenna on a perovskite substrate is analyzed and designed using short circuit technique. The overall dimensions are minimized to 0.09 λ × 0.09 λ. The antenna geometry, the design concept, as well as the simulated and the measured results are discussed in this paper.",
"title": ""
},
{
"docid": "b15bb888a11444f614b4e45317550830",
"text": "Transactional Memory (TM) is emerging as a promising technology to simplify parallel programming. While several TM systems have been proposed in the research literature, we are still missing the tools and workloads necessary to analyze and compare the proposals. Most TM systems have been evaluated using microbenchmarks, which may not be representative of any real-world behavior, or individual applications, which do not stress a wide range of execution scenarios. We introduce the Stanford Transactional Application for Multi-Processing (STAMP), a comprehensive benchmark suite for evaluating TM systems. STAMP includes eight applications and thirty variants of input parameters and data sets in order to represent several application domains and cover a wide range of transactional execution cases (frequent or rare use of transactions, large or small transactions, high or low contention, etc.). Moreover, STAMP is portable across many types of TM systems, including hardware, software, and hybrid systems. In this paper, we provide descriptions and a detailed characterization of the applications in STAMP. We also use the suite to evaluate six different TM systems, identify their shortcomings, and motivate further research on their performance characteristics.",
"title": ""
}
] |
scidocsrr
|
67881ff8335ae3b578765af409ddf979
|
Mastering 2048 with Delayed Temporal Coherence Learning, Multi-Stage Weight Promotion, Redundant Encoding and Carousel Shaping
|
[
{
"docid": "1dff8a1fae840411defec05db479040c",
"text": "This paper investigates the use of n-tuple systems as position value functions for the game of Othello. The architecture is described, and then evaluated for use with temporal difference learning. Performance is compared with prev iously developed weighted piece counters and multi-layer perceptrons. The n-tuple system is able to defeat the best performing of these after just five hundred ga mes of selfplay learning. The conclusion is that n-tuple networks learn faster and better than the other more conventional approaches.",
"title": ""
}
] |
[
{
"docid": "a880d38d37862b46dc638b9a7e45b6ee",
"text": "This paper presents the modeling, simulation, and analysis of the dynamic behavior of a fictitious 2 × 320 MW variable-speed pump-turbine power plant, including a hydraulic system, electrical equipment, rotating inertias, and control systems. The modeling of the hydraulic and electrical components of the power plant is presented. The dynamic performances of a control strategy in generating mode and one in pumping mode are investigated by the simulation of the complete models in the case of change of active power set points. Then, a pseudocontinuous model of the converters feeding the rotor circuits is described. Due to this simplification, the simulation time can be reduced drastically (approximately factor 60). A first validation of the simplified model of the converters is obtained by comparison of the simulated results coming from the simplified and complete models for different modes of operation of the power plant. Experimental results performed on a 2.2-kW low-power test bench are also compared with the simulated results coming from both complete and simplified models related to this case and confirm the validity of the proposed simplified approach for the converters.",
"title": ""
},
{
"docid": "50ec9d25a24e67481a4afc6a9519b83c",
"text": "Weakly supervised image segmentation is an important yet challenging task in image processing and pattern recognition fields. It is defined as: in the training stage, semantic labels are only at the image-level, without regard to their specific object/scene location within the image. Given a test image, the goal is to predict the semantics of every pixel/superpixel. In this paper, we propose a new weakly supervised image segmentation model, focusing on learning the semantic associations between superpixel sets (graphlets in this paper). In particular, we first extract graphlets from each image, where a graphlet is a small-sized graph measures the potential of multiple spatially neighboring superpixels (i.e., the probability of these superpixels sharing a common semantic label, such as the sky or the sea). To compare different-sized graphlets and to incorporate image-level labels, a manifold embedding algorithm is designed to transform all graphlets into equal-length feature vectors. Finally, we present a hierarchical Bayesian network to capture the semantic associations between postembedding graphlets, based on which the semantics of each superpixel is inferred accordingly. Experimental results demonstrate that: 1) our approach performs competitively compared with the state-of-the-art approaches on three public data sets and 2) considerable performance enhancement is achieved when using our approach on segmentation-based photo cropping and image categorization.",
"title": ""
},
{
"docid": "ed0b269f861775550edd83b1eb420190",
"text": "The continuous innovation process of the Information and Communication Technology (ICT) sector shape the way businesses redefine their business models. Though, current drivers of innovation processes focus solely on a technical dimension, while disregarding social and environmental drivers. However, examples like Nokia, Yahoo or Hewlett-Packard show that even though a profitable business model exists, a sound strategic innovation process is needed to remain profitable in the long term. A sustainable business model innovation demands the incorporation of all dimensions of the triple bottom line. Nevertheless, current management processes do not take the responsible steps to remain sustainable and keep being in denial of the evolutionary direction in which the markets develop, because the effects are not visible in short term. The implications are of substantial effect and can bring the foundation of the company’s business model in danger. This work evaluates the decision process that lets businesses decide in favor of un-sustainable changes and points out the barriers that prevent the development towards a sustainable business model that takes the new balance of forces into account.",
"title": ""
},
{
"docid": "98b1965e232cce186b9be4d7ce946329",
"text": "Currently existing dynamic models for a two-wheeled inverted pendulum mobile robot have some common mistakes. In order to find where the errors of the dynamic model are induced, Lagrangian method and Kane's method are compared in deriving the equation of motion. Numerical examples are given to illustrate the effect of the incorrect terms. Finally, a complete dynamic model is proposed without any error and missing terms.",
"title": ""
},
{
"docid": "673c0d74b0df4cfe698d1a7397fc1365",
"text": "The intense growth of Internet of Things (IoTs), its multidisciplinary nature and broadcasting communication pattern made it very challenging for research community/domain. Operating systems for IoTs plays vital role in this regard. Through this research contribution, the objective is to present an analytical study on the recent developments on operating systems specifically designed or fulfilled the needs of IoTs. Starting from study and advances in the field of IoTs with focus on existing operating systems specifically for IoTs. Finally the existing operating systems for IoTs are evaluated and compared on some set criteria and facts and findings are presented.",
"title": ""
},
{
"docid": "5980e6111c145db3e1bfc5f47df7ceaf",
"text": "Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exist. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data-and the CNNs outperformed the human test persons.",
"title": ""
},
{
"docid": "dbc3355eb2b88432a4bd21d42c090ef1",
"text": "With advancement of technology things are becoming simpler and easier for us. Automatic systems are being preferred over manual system. This unit talks about the basic definitions needed to understand the Project better and further defines the technical criteria to be implemented as a part of this project. Keywords-component; Automation, 8051 microcontroller, LDR, LED, ADC, Relays, LCD display, Sensors, Stepper motor",
"title": ""
},
{
"docid": "9ae6f2f858bf613760718688be947c55",
"text": "We propose a neural multi-document summarization (MDS) system that incorporates sentence relation graphs. We employ a Graph Convolutional Network (GCN) on the relation graphs, with sentence embeddings obtained from Recurrent Neural Networks as input node features. Through multiple layer-wise propagation, the GCN generates high-level hidden sentence features for salience estimation. We then use a greedy heuristic to extract salient sentences while avoiding redundancy. In our experiments on DUC 2004, we consider three types of sentence relation graphs and demonstrate the advantage of combining sentence relations in graphs with the representation power of deep neural networks. Our model improves upon traditional graph-based extractive approaches and the vanilla GRU sequence model with no graph, and it achieves competitive results against other state-of-the-art multidocument summarization systems.",
"title": ""
},
{
"docid": "1726729c32f43917802b902267769dda",
"text": "The creation of micro air vehicles (MAVs) of the same general sizes and weight as natural fliers has spawned renewed interest in flapping wing flight. With a wingspan of approximately 15 cm and a flight speed of a few meters per second, MAVs experience the same low Reynolds number (10–10) flight conditions as their biological counterparts. In this flow regime, rigid fixed wings drop dramatically in aerodynamic performance while flexible flapping wings gain efficacy and are the preferred propulsion method for small natural fliers. Researchers have long realized that steady-state aerodynamics does not properly capture the physical phenomena or forces present in flapping flight at this scale. Hence, unsteady flow mechanisms must dominate this regime. Furthermore, due to the low flight speeds, any disturbance such as gusts or wind will dramatically change the aerodynamic conditions around the MAV. In response, a suitable feedback control system and actuation technology must be developed so that the wing can maintain its aerodynamic efficiency in this extremely dynamic situation; one where the unsteady separated flow field and wing structure are tightly coupled and interact nonlinearly. For instance, birds and bats control their flexible wings with muscle tissue to successfully deal with rapid changes in the flow environment. Drawing from their example, perhaps MAVs can use lightweight actuators in conjunction with adaptive feedback control to shape the wing and achieve active flow control. This article first reviews the scaling laws and unsteady flow regime constraining both biological and man-made fliers. Then a summary of vortex dominated unsteady aerodynamics follows. Next, aeroelastic coupling and its effect on lift and thrust are discussed. Afterwards, flow control strategies found in nature and devised by man to deal with separated flows are examined. Recent work is also presented in using microelectromechanical systems (MEMS) actuators and angular speed variation to achieve active flow control for MAVs. Finally, an explanation for aerodynamic gains seen in flexible versus rigid membrane wings, derived from an unsteady three-dimensional computational fluid dynamics model with an integrated distributed control algorithm, is presented. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e59eec639d7104a5038eaaefa69edd95",
"text": "Learning the embedding for social media data has attracted extensive research interests as well as boomed a lot of applications, such as classification and link prediction. In this paper, we examine the scenario of a multimodal network with nodes containing multimodal contents and connected by heterogeneous relationships, such as social images containing multimodal contents (e.g., visual content and text description), and linked with various forms (e.g., in the same album or with the same tag). However, given the multimodal network, simply learning the embedding from the network structure or a subset of content results in sub-optimal representation. In this paper, we propose a novel deep embedding method, i.e., Attention-based Multi-view Variational Auto-Encoder (AMVAE), to incorporate both the link information and the multimodal contents for more effective and efficient embedding. Specifically, we adopt LSTM with attention model to learn the correlation between different data modalities, such as the correlation between visual regions and the specific words, to obtain the semantic embedding of the multimodal contents. Then, the link information and the semantic embedding are considered as two correlated views. A multi-view correlation learning based Variational Auto-Encoder (VAE) is proposed to learn the representation of each node, in which the embedding of link information and multimodal contents are integrated and mutually reinforced. Experiments on three real-world datasets demonstrate the superiority of the proposed model in two applications, i.e., multi-label classification and link prediction.",
"title": ""
},
{
"docid": "a9242c3fca5a8ffdf0e03776b8165074",
"text": "This paper presents inexpensive computer vision techniques allowing to measure the texture characteristics of woven fabric, such as weave repeat and yarn counts, and the surface roughness. First, we discuss the automatic recognition of weave pattern and the accurate measurement of yarn counts by analyzing fabric sample images. We propose a surface roughness indicator FDFFT, which is the 3-D surface fractal dimension measurement calculated from the 2-D fast Fourier transform of high-resolution 3-D surface scan. The proposed weave pattern recognition method was validated by using computer-simulated woven samples and real woven fabric images. All weave patterns of the tested fabric samples were successfully recognized, and computed yarn counts were consistent with the manual counts. The rotation invariance and scale invariance of FDFFT were validated with fractal Brownian images. Moreover, to evaluate the correctness of FDFFT, we provide a method of calculating standard roughness parameters from the 3-D fabric surface. According to the test results, we demonstrated that FDFFT is a fast and reliable parameter for fabric roughness measurement based on 3-D surface data.",
"title": ""
},
{
"docid": "24ecf1119592cc5496dc4994d463eabe",
"text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.",
"title": ""
},
{
"docid": "105951b58d594fdb3a07e1adbb76dc5f",
"text": "The “Prediction by Partial Matching” (PPM) data compression algorithm developed by Cleary and Witten is capable of very high compression rates, encoding English text in as little as 2.2 bits/character. Here it is shown that the estimates made by Cleary and Witten of the resources required to implement the scheme can be revised to allow for a tractable and useful implementation. In particular, a variant is described that encodes and decodes at over 4 kbytes/s on a small workstation, and operates within a few hundred kilobytes of data space, but still obtains compression of about 2.4 bits/character on",
"title": ""
},
{
"docid": "c2c994664e3aecff1ccb8d8feaf860e9",
"text": "Hazard zones associated with LNG handling activities have been a major point of contention in recent terminal development applications. Debate has reflected primarily worst case scenarios and discussion of these. This paper presents results from a maximum credible event approach. A comparison of results from several models either run by the authors or reported in the literature is presented. While larger scale experimental trials will be necessary to reduce the uncertainty, in the interim a set of base cases are suggested covering both existing trials and credible and worst case events is proposed. This can assist users to assess the degree of conservatism present in quoted modeling approaches and model selections.",
"title": ""
},
{
"docid": "0a2be958c7323d3421304d1613421251",
"text": "Stock price forecasting has aroused great concern in research of economy, machine learning and other fields. Time series analysis methods are usually utilized to deal with this task. In this paper, we propose to combine news mining and time series analysis to forecast inter-day stock prices. News reports are automatically analyzed with text mining techniques, and then the mining results are used to improve the accuracy of time series analysis algorithms. The experimental result on a half year Chinese stock market data indicates that the proposed algorithm can help to improve the performance of normal time series analysis in stock price forecasting significantly. Moreover, the proposed algorithm also performs well in stock price trend forecasting.",
"title": ""
},
{
"docid": "e1ed9d36e7b84ce7dcc74ac5f684ea76",
"text": "As integrated circuits (ICs) continue to have an overwhelming presence in our digital information-dominated world, having trust in their manufacture and distribution mechanisms is crucial. However, with ever-shrinking transistor technologies, the cost of new fabrication facilities is becoming prohibitive, pushing industry to make greater use of potentially less reliable foreign sources for their IC supply. The 2008 Computer Security Awareness Week (CSAW) Embedded Systems Challenge at the Polytechnic Institute of NYU highlighted some of the vulnerabilities of the IC supply chain in the form of a hardware hacking challenge. This paper explores the design and implementation of our winning entry.",
"title": ""
},
{
"docid": "051fc43d9e32d8b9d8096838b53c47cb",
"text": "Median filtering is a cornerstone of modern image processing and is used extensively in smoothing and de-noising applications. The fastest commercial implementations (e.g. in Adobe® Photoshop® CS2) exhibit O(r) runtime in the radius of the filter, which limits their usefulness in realtime or resolution-independent contexts. We introduce a CPU-based, vectorizable O(log r) algorithm for median filtering, to our knowledge the most efficient yet developed. Our algorithm extends to images of any bit-depth, and can also be adapted to perform bilateral filtering. On 8-bit data our median filter outperforms Photoshop's implementation by up to a factor of fifty.",
"title": ""
},
{
"docid": "b41b14ed0091a06072629be78bec090b",
"text": "The 2-D orthogonal wavelet transform decomposes images into both spatial and spectrally local coefficients. The transformed coefficients were coded hierarchically and individually quantized in accordance with the local estimated noise sensitivity of the human visual system (HVS). The algorithm can be mapped easily onto VLSI. For the Miss America and Lena monochrome images, the technique gave high to acceptable quality reconstruction at compression ratios of 0.3-0.2 and 0.64-0.43 bits per pixel (bpp), respectively.",
"title": ""
},
{
"docid": "c55de58c07352373570ec7d46c5df03d",
"text": "Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.",
"title": ""
},
{
"docid": "be5b0dd659434e77ce47034a51fd2767",
"text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks",
"title": ""
}
] |
scidocsrr
|
a10a30da37c030f4a51a82b422fadcd7
|
Code Design for Short Blocks: A Survey
|
[
{
"docid": "545adbeb802c7f8a70390ecf424e7f58",
"text": "We describe a successive-cancellation list decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Arikan. In the proposed list decoder, up to L decoding paths are considered concurrently at each decoding stage. Simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of L. Thus it appears that the proposed list decoder bridges the gap between successive-cancellation and maximum-likelihood decoding of polar codes. The specific list-decoding algorithm that achieves this performance doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the L “best” paths. In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, straightforward implementation still requires O(L · n2) time, which is in stark contrast with the O(n log n) complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only O(L · n log n) time and O(L · n) space.",
"title": ""
}
] |
[
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "f6fc0992624fd3b3e0ce7cc7fc411154",
"text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.",
"title": ""
},
{
"docid": "fd1e327327068a1373e35270ef257c59",
"text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "28530d3d388edc5d214a94d70ad7f2c3",
"text": "In next generation wireless mobile networks, network virtualization will become an important key technology. In this paper, we firstly propose a resource allocation scheme for enabling efficient resource allocation in wireless network virtualization. Then, we formulate the resource allocation strategy as an optimization problem, considering not only the revenue earned by serving end users of virtual networks, but also the cost of leasing infrastructure from infrastructure providers. In addition, we develop an efficient alternating direction method of multipliers (ADMM)-based distributed virtual resource allocation algorithm in virtualized wireless networks. Simulation results are presented to show the effectiveness of the proposed scheme.",
"title": ""
},
{
"docid": "26011dba6cc608e599f8393b2d2fc8be",
"text": "Connections between relations in relation extraction, which we call class ties, are common. In distantly supervised scenario, one entity tuple may have multiple relation facts. Exploiting class ties between relations of one entity tuple will be promising for distantly supervised relation extraction. However, previous models are not effective or ignore to model this property. In this work, to effectively leverage class ties, we propose to make joint relation extraction with a unified model that integrates convolutional neural network with a general pairwise ranking framework, in which two novel ranking loss functions are introduced. Additionally, an effective method is presented to relieve the impact of NR (not relation) for model training, which significantly boosts our model performance. Experiments on a widely used dataset show that leveraging class ties will enhance extraction and demonstrate that our model is effective to learn class ties. Our model outperforms baselines significantly, achieving state-of-the-art performance. The source code of this paper can be obtained from https://github.com/ yehaibuaa/DS_RE_DeepRanking.",
"title": ""
},
{
"docid": "012bcbc6b5e7b8aaafd03f100489961c",
"text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.",
"title": ""
},
{
"docid": "350cda71dae32245b45d96b5fdd37731",
"text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.",
"title": ""
},
{
"docid": "faf53f190fe226ce14f32f9d44d551b5",
"text": "We present a study of how Linux kernel developers respond to bug reports issued by a static analysis tool. We found that developers prefer to triage reports in younger, smaller, and more actively-maintained files ( §2), first address easy-to-fix bugs and defer difficult (but possibly critical) bugs ( §3), and triage bugs in batches rather than individually (§4). Also, although automated tools cannot find many types of bugs, they can be effective at directing developers’ attentions towards parts of the codebase that contain up to 3X more user-reported bugs ( §5). Our insights into developer attitudes towards static analysis tools allow us to make suggestions for improving their usability and effectiveness. We feel that it could be effective to run static analysis tools continuously while programming and before committing code, to rank reports so that those most likely to be triaged are shown to developers first, to show the easiest reports to new developers, to perform deeper analysis on more actively-maintained code, and to use reports as indirect indicators of code quality and importance.",
"title": ""
},
{
"docid": "4e6ff17d33aceaa63ec156fc90aed2ce",
"text": "Objective:\nThe aim of the present study was to translate and cross-culturally adapt the Functional Status Score for the intensive care unit (FSS-ICU) into Brazilian Portuguese.\n\n\nMethods:\nThis study consisted of the following steps: translation (performed by two independent translators), synthesis of the initial translation, back-translation (by two independent translators who were unaware of the original FSS-ICU), and testing to evaluate the target audience's understanding. An Expert Committee supervised all steps and was responsible for the modifications made throughout the process and the final translated version.\n\n\nResults:\nThe testing phase included two experienced physiotherapists who assessed a total of 30 critical care patients (mean FSS-ICU score = 25 ± 6). As the physiotherapists did not report any uncertainties or problems with interpretation affecting their performance, no additional adjustments were made to the Brazilian Portuguese version after the testing phase. Good interobserver reliability between the two assessors was obtained for each of the 5 FSS-ICU tasks and for the total FSS-ICU score (intraclass correlation coefficients ranged from 0.88 to 0.91).\n\n\nConclusion:\nThe adapted version of the FSS-ICU in Brazilian Portuguese was easy to understand and apply in an intensive care unit environment.",
"title": ""
},
{
"docid": "8e0badc0828019460da0017774c8b631",
"text": "To meet the explosive growth in traffic during the next twenty years, 5G systems using local area networks need to be developed. These systems will comprise of small cells and will use extreme cell densification. The use of millimeter wave (Mmwave) frequencies, in particular from 20 GHz to 90 GHz, will revolutionize wireless communications given the extreme amount of available bandwidth. However, the different propagation conditions and hardware constraints of Mmwave (e.g., the use of RF beamforming with very large arrays) require reconsidering the modulation methods for Mmwave compared to those used below 6 GHz. In this paper we present ray-tracing results, which, along with recent propagation measurements at Mmwave, all point to the fact that Mmwave frequencies are very appropriate for next generation, 5G, local area wireless communication systems. Next, we propose null cyclic prefix single carrier as the best candidate for Mmwave communications. Finally, systemlevel simulation results show that with the right access point deployment peak rates of over 15 Gbps are possible at Mmwave along with a cell edge experience in excess of 400 Mbps.",
"title": ""
},
{
"docid": "2bc86a02909f16ad0372a36dd92c954c",
"text": "Multi-view learning is an emerging direction in machine learning which considers learning with multiple views to improve the generalization performance. Multi-view learning is also known as data fusion or data integration from multiple feature sets. Since the last survey of multi-view machine learning in early 2013, multi-view learning has made great progress and developments in recent years, and is facing new challenges. This overview first reviews theoretical underpinnings to understand the properties and behaviors of multi-view learning. Then multi-view learning methods are described in terms of three classes to offer a neat categorization and organization. For each category, representative algorithms and newly proposed algorithms are presented. The main feature of this survey is that we provide comprehensive introduction for the recent developments of multi-view learning methods on the basis of coherence with early methods. We also attempt to identify promising venues and point out some specific challenges which can hopefully promote further research in this rapidly developing field. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "95333e4206a3b4c1a576f452c591421f",
"text": "Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.",
"title": ""
},
{
"docid": "c74b93fff768f024b921fac7f192102d",
"text": "Motivated by information-theoretic considerations, we pr opose a signalling scheme, unitary spacetime modulation, for multiple-antenna communication links. This modulati on s ideally suited for Rayleigh fast-fading environments, since it does not require the rec iv r to know or learn the propagation coefficients. Unitary space-time modulation uses constellations of T M space-time signals f `; ` = 1; : : : ; Lg, whereT represents the coherence interval during which the fading i s approximately constant, and M < T is the number of transmitter antennas. The columns of each ` are orthonormal. When the receiver does not know the propagation coefficients, which between pa irs of transmitter and receiver antennas are modeled as statistically independent, this modulation per forms very well either when the SNR is high or whenT M . We design some multiple-antenna signal constellations and simulate their effectiveness as measured by bit error probability with maximum likelihood decoding. We demonstrate that two antennas have a 6 dB diversity gain over one antenna at 15 dB SNR. Index Terms —Multi-element antenna arrays, wireless communications, channel coding, fading channels, transmitter and receiver diversity, space-time modu lation",
"title": ""
},
{
"docid": "7fc6ffb547bc7a96e360773ce04b2687",
"text": "Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.",
"title": ""
},
{
"docid": "5b149ce093d0e546a3e99f92ef1608a0",
"text": "Smartphones have been becoming ubiquitous and mobile users are increasingly relying on them to store and handle personal information. However, recent studies also reveal the disturbing fact that users’ personal information is put at risk by (rogue) smartphone applications. Existing solutions exhibit limitations in their capabilities in taming these privacy-violating smartphone applications. In this paper, we argue for the need of a new privacy mode in smartphones. The privacy mode can empower users to flexibly control in a fine-grained manner what kinds of personal information will be accessible to an application. Also, the granted access can be dynamically adjusted at runtime in a fine-grained manner to better suit a user’s needs in various scenarios (e.g., in a different time or location). We have developed a system called TISSA that implements such a privacy mode on Android. The evaluation with more than a dozen of information-leaking Android applications demonstrates its effectiveness and practicality. Furthermore, our evaluation shows that TISSA introduces negligible performance overhead.",
"title": ""
},
{
"docid": "cfebffcb4f0d082e7733c7c92c4a1700",
"text": "While attacks on information systems have for most practical purposes binary outcomes (information was manipulated/eavesdropped, or not), attacks manipulating the sensor or control signals of Industrial Control Systems (ICS) can be tuned by the attacker to cause a continuous spectrum in damages. Attackers that want to remain undetected can attempt to hide their manipulation of the system by following closely the expected behavior of the system, while injecting just enough false information at each time step to achieve their goals. In this work, we study if attack-detection can limit the impact of such stealthy attacks. We start with a comprehensive review of related work on attack detection schemes in the security and control systems community. We then show that many of those works use detection schemes that are not limiting the impact of stealthy attacks. We propose a new metric to measure the impact of stealthy attacks and how they relate to our selection on an upper bound on false alarms. We finally show that the impact of such attacks can be mitigated in several cases by the proper combination and configuration of detection schemes. We demonstrate the effectiveness of our algorithms through simulations and experiments using real ICS testbeds and real ICS systems.",
"title": ""
},
{
"docid": "8324dc0dfcfb845739a22fb9321d5482",
"text": "In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) is updated to maximize the lower bound; p(x) is then updated one step with samples drawn from q(x) to decrease the lower bound. VGAN is inspired by the generative adversarial networks (GANs), where p(x) corresponds to the discriminator and q(x) corresponds to the generator, but with several notable differences. We hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need of MCMC sampling. From this view, we are also able to identify causes to the difficulty of training GANs and propose viable solutions. 1",
"title": ""
},
{
"docid": "73d4a47d4aba600b4a3bcad6f7f3588f",
"text": "Humans can easily perform tasks that use vision and language jointly, such as describing a scene and answering questions about objects in the scene and how they are related. Image captioning and visual question & answer are two popular research tasks that have emerged from advances in deep learning and the availability of datasets that specifically address these problems. However recent work has shown that deep learning based solutions to these tasks are just as brittle as solutions for only vision or only natural language tasks. Image captioning is vulnerable to adversarial perturbations; novel objects, which are not described in training data, and contextual biases in training data can degrade performance in surprising ways. For these reasons, it is important to find ways in which general-purpose knowledge can guide connectionist models. We investigate challenges to integrate existing ontologies and knowledge bases with deep learning solutions, and possible approaches for overcoming such challenges. We focus on geo-referenced data such as geo-tagged images and videos that capture outdoor scenery. Geo-knowledge bases are domain specific knowledge bases that contain concepts and relations that describe geographic objects. This work proposes to increase the robustness of automatic scene description and inference by leveraging geo-knowledge bases along with the strengths of deep learning for visual object detection and classification.",
"title": ""
},
{
"docid": "5aee510b62d8792a38044fc8c68a57e4",
"text": "In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles.",
"title": ""
},
{
"docid": "8f5ca5819dd28c686da78332add76fb0",
"text": "The emerging Service-Oriented Computing (SOC) paradigm promises to enable businesses and organizations to collaborate in an unprecedented way by means of standard web services. To support rapid and dynamic composition of services in this paradigm, web services that meet requesters' functional requirements must be able to be located and bounded dynamically from a large and constantly changing number of service providers based on their Quality of Service (QoS). In order to enable quality-driven web service selection, we need an open, fair, dynamic and secure framework to evaluate the QoS of a vast number of web services. The fair computation and enforcing of QoS of web services should have minimal overhead but yet able to achieve sufficient trust by both service requesters and providers. In this paper, we presented our open, fair and dynamic QoS computation model for web services selection through implementation of and experimentation with a QoS registry in a hypothetical phone service provisioning market place application.",
"title": ""
}
] |
scidocsrr
|
05cd06306a27dba16ad576c9db016016
|
Raindrop detection and removal using salient visual features
|
[
{
"docid": "f0c25bb609bc6946b558bcd0ccdaee22",
"text": "A biologically motivated computational model of bottom-up visual selective attention was used to examine the degree to which stimulus salience guides the allocation of attention. Human eye movements were recorded while participants viewed a series of digitized images of complex natural and artificial scenes. Stimulus dependence of attention, as measured by the correlation between computed stimulus salience and fixation locations, was found to be significantly greater than that expected by chance alone and furthermore was greatest for eye movements that immediately follow stimulus onset. The ability to guide attention of three modeled stimulus features (color, intensity and orientation) was examined and found to vary with image type. Additionally, the effect of the drop in visual sensitivity as a function of eccentricity on stimulus salience was examined, modeled, and shown to be an important determiner of attentional allocation. Overall, the results indicate that stimulus-driven, bottom-up mechanisms contribute significantly to attentional guidance under natural viewing conditions.",
"title": ""
}
] |
[
{
"docid": "adb02577e7fba530c2406fbf53571d14",
"text": "Event-related potentials (ERPs) recorded from the human scalp can provide important information about how the human brain normally processes information and about how this processing may go awry in neurological or psychiatric disorders. Scientists using or studying ERPs must strive to overcome the many technical problems that can occur in the recording and analysis of these potentials. The methods and the results of these ERP studies must be published in a way that allows other scientists to understand exactly what was done so that they can, if necessary, replicate the experiments. The data must then be analyzed and presented in a way that allows different studies to be compared readily. This paper presents guidelines for recording ERPs and criteria for publishing the results.",
"title": ""
},
{
"docid": "007a42bdf781074a2d00d792d32df312",
"text": "This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions.",
"title": ""
},
{
"docid": "c56517951ae6f3713543377ca25862fe",
"text": "Over the last ten years, social media has become an integral facet of modern society. In particular, image-based social networking sites such as Instagram have become increasingly popular among adolescents and young adults. However, despite this proliferation of use, the literature remains divided regarding the potential impacts of social media, particularly in regards to image-based platforms. The present study sought to analyze the relationship between social media usage patterns and its subsequent effects on user self-esteem and well-being. However, the study’s results show that, despite the existing literature, intensity of Instagram use serves as a mediating variable in this relationship. The study’s results show that it is intensity of use, not usage patterns, that determine user outcomes. Finally, the results show that users who engage with Instagram more intensely exhibit higher levels of self-esteem and well-being than users who do not use the application intensely.",
"title": ""
},
{
"docid": "7670affb6d1c1f6a59b544d24dc4d34d",
"text": "During the past years the Cloud Computing offer has exponentially grown, with new Cloud providers, platforms and services being introduced in the IT market. The extreme variety of services, often providing non uniform and incompatible interfaces, makes it hard for customers to decide how to develop, or even worse to migrate, their own application into the Cloud. This situation can only get worse when customers want to exploit services from different providers, because of the portability and interoperability issues that often arise. In this paper we propose a uniform, integrated, machine-readable, semantic representation of cloud services, patterns, appliances and their compositions. Our approach aims at supporting the development of new applications for the Cloud environment, using semantic models and automatic reasoning to enhance potability and interoperability when multiple platforms are involved. In particular, the proposed reasoning procedure allows to: perform automatic discovery of Cloud services and Appliances; map between agnostic and vendor dependent Cloud Patterns and Services; automatically enrich the semantic knowledge base.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "9948ebbd2253021e3af53534619c5094",
"text": "This paper presents a novel method to simultaneously estimate the clothed and naked 3D shapes of a person. The method needs only a single photograph of a person wearing clothing. Firstly, we learn a deformable model of human clothed body shapes from a database. Then, given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. And the correspondence between 3D shape and 2D contours is established automatically. Finally, we optimize the parameters of the deformable model in an iterative way, and then obtain the clothed and naked 3D shapes of the person simultaneously. The experimental results on real images demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "bd556eb27562a3b5ca9097dbe1097609",
"text": "The record layer is the main bridge between TLS applications and internal sub-protocols. Its core functionality is an elaborate form of authenticated encryption: streams of messages for each sub-protocol (handshake, alert, and application data) are fragmented, multiplexed, and encrypted with optional padding to hide their lengths. Conversely, the sub-protocols may provide fresh keys or signal stream termination to the record layer. Compared to prior versions, TLS 1.3 discards obsolete schemes in favor of a common construction for Authenticated Encryption with Associated Data (AEAD), instantiated with algorithms such as AES-GCM and ChaCha20-Poly1305. It differs from TLS 1.2 in its use of padding, associated data and nonces. It also encrypts the content-type used to multiplex between sub-protocols. New protocol features such as early application data (0-RTT and 0.5-RTT) and late handshake messages require additional keys and a more general model of stateful encryption. We build and verify a reference implementation of the TLS record layer and its cryptographic algorithms in F*, a dependently typed language where security and functional guarantees can be specified as pre-and post-conditions. We reduce the high-level security of the record layer to cryptographic assumptions on its ciphers. Each step in the reduction is verified by typing an F* module, for each step that involves a cryptographic assumption, this module precisely captures the corresponding game. We first verify the functional correctness and injectivity properties of our implementations of one-time MAC algorithms (Poly1305 and GHASH) and provide a generic proof of their security given these two properties. We show the security of a generic AEAD construction built from any secure one-time MAC and PRF. We extend AEAD, first to stream encryption, then to length-hiding, multiplexed encryption. Finally, we build a security model of the record layer against an adversary that controls the TLS sub-protocols. We compute concrete security bounds for the AES_128_GCM, AES_256_GCM, and CHACHA20_POLY1305 ciphersuites, and derive recommended limits on sent data before re-keying. We plug our implementation of the record layer into the miTLS library, confirm that they interoperate with Chrome and Firefox, and report initial performance results. Combining our functional correctness, security, and experimental results, we conclude that the new TLS record layer (as described in RFCs and cryptographic standards) is provably secure, and we provide its first verified implementation.",
"title": ""
},
{
"docid": "8a634e7bf127f2a90227c7502df58af0",
"text": "A convex channel surface with Si0.8Ge0.2 is proposed to enhance the retention time of a capacitorless DRAM Generation 2 type of capacitorless DRAM cell. This structure provides a physical well together with an electrostatic barrier to more effectively store holes and thereby achieve larger sensing margin as well as retention time. The advantages of this new cell design as compared with the planar cell design are assessed via twodimensional device simulations. The results indicate that the convex heterojunction channel design is very promising for future capacitorless DRAM. Keywords-Capacitorless DRAM; Retention Time; Convex Channel; Silicon Germanium;",
"title": ""
},
{
"docid": "18377326a8c12b527c641173da866284",
"text": "This paper presents a 4-channel analog front-end (AFE) for Electromyogram (EMG) acquisition systems. Each input channel consists of a chopper-stabilized instrumentation amplifier (IA) and a low-pass filer (LPF). A 15-bit analog-to-digital converter (ADC) with a buffer amplifier is shared with four input channels through multiplexer. An incremental ADC with a 1.5-bit second-order feed-forward topology is employed to achieve 15-bit resolution. The prototype AFE is fabricated in a 0.18 μm CMOS process with an active die area of 1.5 mm2. It achieves 3.2 μVrms input referred noise with a gain of 40 dB and a cutoff frequency of 500 Hz for LPF while consuming 3.713 mW from a 1.8V supply.",
"title": ""
},
{
"docid": "787979d6c1786f110ff7a47f09b82907",
"text": "Imbalance settlement markets are managed by the system operators and provide a mechanism for settling the inevitable discrepancies between contractual agreements and physical delivery. In European power markets, settlements schemes are mainly based on heuristic penalties. These arrangements have disadvantages: First, they do not provide transparency about the cost of the reserve capacity that the system operator may have obtained ahead of time, nor about the cost of the balancing energy that is actually deployed. Second, they can be gamed if market participants use the imbalance settlement as an opportunity for market arbitrage, for example if market participants use balancing energy to avoid higher costs through regular trade on illiquid energy markets. Third, current practice hinders the market-based integration of renewable energy and the provision of financial incentives for demand response through rigid penalty rules. In this paper we try to remedy these disadvantages by proposing an imbalance settlement procedure with an incentive compatible cost allocation scheme for reserve capacity and deployed energy. Incentive compatible means that market participants voluntarily and truthfully state their valuation of ancillary services. We show that this approach guarantees revenue sufficiency for the system operator and provides financial incentives for balance responsible parties to keep imbalances close to zero.",
"title": ""
},
{
"docid": "c330e97f4c7c3478670e55991ac2293c",
"text": "The MoveLab was an educational research intervention centering on a community of African American and Hispanic girls as they began to transform their self-concept in relation to computing and dance while creating technology enhanced dance performances. Students within underrepresented populations in computing often do not perceive the identity of a computer scientist as aligning with their interests or value system, leading to rejection of opportunities to participate within the discipline. To engage diverse populations in computing, we need to better understand how to support students in navigating conflicts between identities with computing and their personal interest and values. Using the construct of self-concept, we observed students in the workshop creating both congruence and dissension between their self-concept and computing. We found that creating multiple roles for participation, fostering a socially supportive community, and integrating student values within the curriculum led to students forming congruence between their self-concept and the disciplines of computing and dance.",
"title": ""
},
{
"docid": "d72bb787f20a08e70d5f0294551907d7",
"text": "In this paper we present a novel strategy, DragPushing, for improving the performance of text classifiers. The strategy is generic and takes advantage of training errors to successively refine the classification model of a base classifier. We describe how it is applied to generate two new classification algorithms; a Refined Centroid Classifier and a Refined Naïve Bayes Classifier. We present an extensive experimental evaluation of both algorithms on three English collections and one Chinese corpus. The results indicate that in each case, the refined classifiers achieve significant performance improvement over the base classifiers used. Furthermore, the performance of the Refined Centroid Classifier implemented is comparable, if not better, to that of state-of-the-art support vector machine (SVM)-based classifier, but offers a much lower computational cost.",
"title": ""
},
{
"docid": "8eb907b00933dfa59c95b919dd0579e9",
"text": "Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers freehead, simple personal calibration. It does not require the user wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle).",
"title": ""
},
{
"docid": "d8df668e4f80c356165d816ee454ab5f",
"text": "Despite the advances of the electronic technologies in e-learning, a consolidated evaluation methodology for e-learning applications does not yet exist. The goal of e-learning is to offer the users the possibility to become skillful and acquire knowledge on a new domain. The evaluation of educational software must consider its pedagogic effectiveness as well as its usability. The design of its interface should take into account the way students learn and also provide good usability so that student's interactions with the software are as natural and intuitive as possible. In this paper, we present the results obtained from a first phase of observation and analysis of the interactions of people with e-learning applications. The aim is to provide a methodology for evaluating such applications.",
"title": ""
},
{
"docid": "2aae53713324b297f0e145ef8d808ce9",
"text": "In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. Using the tools of transcendental number theory it is demonstrated that quantum Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on Theory of Computation, 1993, pp. 11–20, SIAM J. Comput., 26 (1997), pp. 1411–1473]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in P#P and PSPACE. A potentially practical issue of designing “machine independent” quantum programs is also addressed. A single (“almost universal”) quantum algorithm based on Shor’s method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to the programmer and the device builder.",
"title": ""
},
{
"docid": "c0d3c14e792a02a9ad57745b31b84be6",
"text": "INTRODUCTION\nCritically ill patients are characterized by increased loss of muscle mass, partially attributed to sepsis and multiple organ failure, as well as immobilization. Recent studies have shown that electrical muscle stimulation (EMS) may be an alternative to active exercise in chronic obstructive pulmonary disease (COPD) and chronic heart failure (CHF) patients with myopathy. The aim of our study was to investigate the EMS effects on muscle mass preservation of critically ill patients with the use of ultrasonography (US).\n\n\nMETHODS\nForty-nine critically ill patients (age: 59 +/- 21 years) with an APACHE II admission score >or=13 were randomly assigned after stratification upon admission to receive daily EMS sessions of both lower extremities (EMS-group) or to the control group (control group). Muscle mass was evaluated with US, by measuring the cross sectional diameter (CSD) of the vastus intermedius and the rectus femoris of the quadriceps muscle.\n\n\nRESULTS\nTwenty-six patients were finally evaluated. Right rectus femoris and right vastus intermedius CSD decreased in both groups (EMS group: from 1.42 +/- 0.48 to 1.31 +/- 0.45 cm, P = 0.001 control group: from 1.59 +/- 0.53 to 1.37 +/- 0.5 cm, P = 0.002; EMS group: from 0.91 +/- 0.39 to 0.81 +/- 0.38 cm, P = 0.001 control group: from 1.40 +/- 0.64 to 1.11 +/- 0.56 cm, P = 0.004, respectively). However, the CSD of the right rectus femoris decreased significantly less in the EMS group (-0.11 +/- 0.06 cm, -8 +/- 3.9%) as compared to the control group (-0.21 +/- 0.10 cm, -13.9 +/- 6.4%; P < 0.05) and the CSD of the right vastus intermedius decreased significantly less in the EMS group (-0.10 +/- 0.05 cm, -12.5 +/- 7.4%) as compared to the control group (-0.29 +/- 0.28 cm, -21.5 +/- 15.3%; P < 0.05).\n\n\nCONCLUSIONS\nEMS is well tolerated and seems to preserve the muscle mass of critically ill patients. The potential use of EMS as a preventive and rehabilitation tool in ICU patients with polyneuromyopathy needs to be further investigated.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov: NCT00882830.",
"title": ""
},
{
"docid": "d38113a02a3f3c97ba044ec06515ffc1",
"text": "We present a real-time monocular vision based range measurement method for Simultaneous Localization and Mapping (SLAM) for an Autonomous Micro Aerial Vehicle (MAV) with significantly constrained payload. Our navigation strategy assumes a GPS denied manmade environment, whose indoor architecture is represented via corner based feature points obtained through a monocular camera. We experiment on a case study mission of vision based path-finding through a conventional maze of corridors in a large building.",
"title": ""
},
{
"docid": "5259c7d1c7b05050596f6667aa262e11",
"text": "We propose a novel approach to automatic detection and tracking of people taking different poses in cluttered and dynamic environments using a single RGB-D camera. The original RGB-D pixels are transformed to a novel point ensemble image (PEI), and we demonstrate that human detection and tracking in 3D space can be performed very effectively with this new representation. The detector in the first phase quickly locates human physiquewise plausible candidates, which are then further carefully filtered in a supervised learning and classification second phase. Joint statistics of color and height are computed for data association to generate final 3D motion trajectories of tracked individuals. Qualitative and quantitative experimental results obtained on the publicly available office dataset, mobile camera dataset and the real-world clothing store dataset we created show very promising results. © 2014 Elsevier B.V. All rights reserved. d T b r a e w c t e i c a i c p p g w e h",
"title": ""
},
{
"docid": "db570f8ff8d714dc2964a9d9b7032bf4",
"text": "Pain related to the osseous thoracolumbar spine is common in the equine athlete, with minimal information available regarding soft tissue pathology. The aims of this study were to describe the anatomy of the equine SSL and ISL (supraspinous and interspinous ligaments) in detail and to assess the innervation of the ligaments and their myofascial attachments including the thoracolumbar fascia. Ten equine thoracolumbar spines (T15-L1) were dissected to define structure and anatomy of the SSL, ISL and adjacent myofascial attachments. Morphological evaluation included histology, electron microscopy and immunohistochemistry (S100 and Substance P) of the SSL, ISL, adjacent fascial attachments, connective tissue and musculature. The anatomical study demonstrated that the SSL and ISL tissues merge with the adjacent myofascia. The ISL has a crossing fibre arrangement consisting of four ligamentous layers with adipose tissue axially. A high proportion of single nerve fibres were detected in the SSL (mean = 2.08 fibres/mm2 ) and ISL (mean = 0.75 fibres/mm2 ), with the larger nerves located between the ligamentous and muscular tissue. The oblique crossing arrangement of the fibres of the ISL likely functions to resist distractive and rotational forces, therefore stabilizing the equine thoracolumbar spine. The dense sensory innervation within the SSL and ISL could explain the severe pain experienced by some horses with impinging dorsal spinous processes. Documentation of the nervous supply of the soft tissues associated with the dorsal spinous processes is a key step towards improving our understanding of equine back pain.",
"title": ""
},
{
"docid": "8e80d8be3b8ccbc4b8b6b6a0dde4136f",
"text": "When an event occurs, it attracts attention of information sources to publish related documents along its lifespan. The task of event detection is to automatically identify events and their related documents from a document stream, which is a set of chronologically ordered documents collected from various information sources. Generally, each event has a distinct activeness development so that its status changes continuously during its lifespan. When an event is active, there are a lot of related documents from various information sources. In contrast when it is inactive, there are very few documents, but they are focused. Previous works on event detection did not consider the characteristics of the event's activeness, and used rigid thresholds for event detection. We propose a concept called life profile, modeled by a hidden Markov model, to model the activeness trends of events. In addition, a general event detection framework, LIPED, which utilizes the learned life profiles and the burst-and-diverse characteristic to adjust the event detection thresholds adaptively, can be incorporated into existing event detection methods. Based on the official TDT corpus and contest rules, the evaluation results show that existing detection methods that incorporate LIPED achieve better performance in the cost and F1 metrics, than without.",
"title": ""
}
] |
scidocsrr
|
ae416d1b78cdd87c029d2c31cd8fc870
|
Faster Kernel Ridge Regression Using Sketching and Preconditioning
|
[
{
"docid": "c89e01779d2df46b3e51c0ce12b2b536",
"text": "We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n). Low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to O(pn), where p is the rank of the approximation. The practicality of such methods thus depends on the required rank p. In this paper, we show that in the context of kernel ridge regression, for approximations based on a random subset of columns of the original kernel matrix, the rank p may be chosen to be linear in the degrees of freedom associated with the problem, a quantity which is classically used in the statistical analysis of such methods, and is often seen as the implicit number of parameters of non-parametric estimators. This result enables simple algorithms that have sub-quadratic running time complexity, but provably exhibit the same predictive performance than existing algorithms, for any given problem instance, and not only for worst-case situations.",
"title": ""
}
] |
[
{
"docid": "493a98882df5ec6b4d6e03313896d487",
"text": "The aim of the study was to compare the color change produced by tray-delivered carbamide peroxide [CP] versus hydrogen peroxide products [HP] for at-home bleaching through a systematic review and meta-analysis. MEDLINE via PubMeb, Scopus, Web of Science, Latin American and Caribbean Health Sciences Literature database (LILACS), Brazilian Library in Dentistry (BBO), and Cochrane Library and Grey literature were searched without restrictions. The abstracts of the International Association for Dental Research (IADR) and unpublished and ongoing trial registries were also searched. Dissertations and theses were explored using the ProQuest Dissertations and Periodicos Capes Theses databases. We included randomized clinical trials that compared tray-delivered CP versus HP for at-home dental bleaching. The color change in shade guide units (SGU) and ΔE were the primary outcomes, and tooth sensitivity and gingival irritation were the secondary outcomes. The risk of bias tool of the Cochrane Collaboration was used for quality assessment. After duplicate removal, 1379 articles were identified. However, only eight studies were considered to be at “low” risk of bias in the key domains of the risk bias tool and they were included in the analysis. For ΔE, the standardized mean difference was −0.45 (95 % CI −0.69 to −0.21), which favored tray-delivered CP products (p < 0.001). The color change in ΔSGU (p = 0.70), tooth sensitivity (p = 0.83), and gingival irritation (p = 0.62) were not significantly different between groups. Tray-delivered CP gels showed a slightly better whitening efficacy than HP-based products in terms of ΔE, but they were similar in terms of ΔSGU. Both whitening systems demonstrated equal level of gingival irritation and tooth sensitivity. Tray-delivered CP gels have a slightly better whitening efficacy than HP-based products in terms of ΔE. This should be interpreted with caution as the data of ΔSGU did not show statistical difference between the products.",
"title": ""
},
{
"docid": "5e14acfc68e8cb1ae7ea9b34eba420e0",
"text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie",
"title": ""
},
{
"docid": "7f8f8810b411aea411cdf496cd0929b6",
"text": "In 2015, Reddit closed several subreddits-foremost among them r/fatpeoplehate and r/CoonTown-due to violations of Reddit's anti-harassment policy. However, the effectiveness of banning as a moderation approach remains unclear: banning might diminish hateful behavior, or it may relocate such behavior to different parts of the site. We study the ban of r/fatpeoplehate and r/CoonTown in terms of its effect on both participating users and affected subreddits. Working from over 100M Reddit posts and comments, we generate hate speech lexicons to examine variations in hate speech usage via causal inference methods. We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage-by at least 80%. Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown \"migrants,\" those subreddits saw no significant changes in hate speech usage. In other words, other subreddits did not inherit the problem. We conclude by reflecting on the apparent success of the ban, discussing implications for online moderation, Reddit and internet communities more broadly.",
"title": ""
},
{
"docid": "eed3f46ca78b6fbbb235fecf71d28f47",
"text": "The popularity of location-based social networks available on mobile devices means that large, rich datasets that contain a mixture of behavioral (users visiting venues), social (links between users), and spatial (distances between venues) information are available for mobile location recommendation systems. However, these datasets greatly differ from those used in other online recommender systems, where users explicitly rate items: it remains unclear as to how they capture user preferences as well as how they can be leveraged for accurate recommendation. This paper seeks to bridge this gap with a three-fold contribution. First, we examine how venue discovery behavior characterizes the large check-in datasets from two different location-based social services, Foursquare and Go Walla: by using large-scale datasets containing both user check-ins and social ties, our analysis reveals that, across 11 cities, between 60% and 80% of users' visits are in venues that were not visited in the previous 30 days. We then show that, by making constraining assumptions about user mobility, state-of-the-art filtering algorithms, including latent space models, do not produce high quality recommendations. Finally, we propose a new model based on personalized random walks over a user-place graph that, by seamlessly combining social network and venue visit frequency data, obtains between 5 and 18% improvement over other models. Our results pave the way to a new approach for place recommendation in location-based social systems.",
"title": ""
},
{
"docid": "348115a5dddbc2bcdcf5552b711e82c0",
"text": "Enterococci are Gram-positive, catalase-negative, non-spore-forming, facultative anaerobic bacteria, which usually inhabit the alimentary tract of humans in addition to being isolated from environmental and animal sources. They are able to survive a range of stresses and hostile environments, including those of extreme temperature (5-65 degrees C), pH (4.5-10.0) and high NaCl concentration, enabling them to colonize a wide range of niches. Virulence factors of enterococci include the extracellular protein Esp and aggregation substances (Agg), both of which aid in colonization of the host. The nosocomial pathogenicity of enterococci has emerged in recent years, as well as increasing resistance to glycopeptide antibiotics. Understanding the ecology, epidemiology and virulence of Enterococcus species is important for limiting urinary tract infections, hepatobiliary sepsis, endocarditis, surgical wound infection, bacteraemia and neonatal sepsis, and also stemming the further development of antibiotic resistance.",
"title": ""
},
{
"docid": "4af6330c9f82b11286a2af54d2f0db22",
"text": "Here, we describe an electrospun mat of poly(vinyl alcohol) (PVA) and graphene oxide (GO) as a novel solid-state electrolyte matrix, which offers better performance retention upon drying after infiltrated with aqueous electrolyte. The PVA-GO mat overcomes the major issue of conventional PVA-based electrolytes, which is the ionic conductivity decay upon drying. After exposure to 45 ± 5% relative humidity at 25 °C for 1 month, its conductivity decay is limited to 38.4%, whereas that of pure PVA mat is as high as 84.0%. This mainly attributes to the hygroscopic nature of GO and the unique nanofiber structure within the mat. Monolithic supercapacitors have been derived directly on the mat via a well-developed laser scribing process. The as-prepared supercapacitor offers an areal capacitance of 9.9 mF cm-2 at 40 mV s-1 even after 1 month of aging under ambient conditions, with a high device-based volumetric energy density of 0.13 mWh cm-3 and a power density of 2.48 W cm-3, demonstrating great promises as a more stable power supply for wearable electronics.",
"title": ""
},
{
"docid": "49148d621dcda718ec5ca761d3485240",
"text": "Understanding and modifying the effects of arbitrary illumination on human faces in a realistic manner is a challenging problem both for face synthesis and recognition. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using spherical harmonics representation. Morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework, by proposing a 3D spherical harmonic basis morphable model (SHBMM) and demonstrate that any face under arbitrary unknown lighting can be simply represented by three low-dimensional vectors: shape parameters, spherical harmonic basis parameters and illumination coefficients. We show that, with our SHBMM, given one single image under arbitrary unknown lighting, we can remove the illumination effects from the image (face \"delighting\") and synthesize new images under different illumination conditions (face \"re-lighting\"). Furthermore, we demonstrate that cast shadows can be detected and subsequently removed by using the image error between the input image and the corresponding rendered image. We also propose two illumination invariant face recognition methods based on the recovered SHBMM parameters and the de-lit images respectively. Experimental results show that using only a single image of a face under unknown lighting, we can achieve high recognition rates and generate photorealistic images of the face under a wide range of illumination conditions, including multiple sources of illumination.",
"title": ""
},
{
"docid": "6767096adc28681387c77a68a3468b10",
"text": "This study investigates fifty small and medium enterprises by using a survey approach to find out the key factors that are determinants to EDI adoption. Based upon the existing model, the study uses six factors grouped into three categories, namely organizational, environmental and technological aspects. The findings indicate that factors such as perceived benefits government support and management support are significant determinants of EDI adoption. The remaining factors like organizational culture, motivation to use EDI and task variety remain insignificant. Based upon the analysis of data, recommendations are made.",
"title": ""
},
{
"docid": "89263084f29469d1c363da55c600a971",
"text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.",
"title": ""
},
{
"docid": "ab646615d167986e393f5ecb3e5bd1d6",
"text": "Inverse dynamics controllers and operational space controllers have proved to be very efficient for compliant control of fully actuated robots such as fixed base manipulators. However legged robots such as humanoids are inherently different as they are underactuated and subject to switching external contact constraints. Recently several methods have been proposed to create inverse dynamics controllers and operational space controllers for these robots. In an attempt to compare these different approaches, we develop a general framework for inverse dynamics control and show that these methods lead to very similar controllers. We are then able to greatly simplify recent whole-body controllers based on operational space approaches using kinematic projections, bringing them closer to efficient practical implementations. We also generalize these controllers such that they can be optimal under an arbitrary quadratic cost in the commands.",
"title": ""
},
{
"docid": "abf46384ca8e5ae59936ac4ed0505c65",
"text": "We design and fabricate wearable whole hand 3D targets complete with four fingerprints and one thumb print for evaluating multi-finger capture devices, e.g., slap and contactless readers. We project 2D calibration patterns onto 3D finger surfaces pertaining to each of the four fingers and the thumb to synthetically generate electronic 3D whole hand targets. A state-of-the-art 3D printer is then used to fabricate physical 3D hand targets with printing materials that are similar in hardness and elasticity to the human skin and are optically compatible for imaging with a variety of fingerprint readers. We demonstrate that the physical 3D whole hand targets can be imaged using three commercial (500/1000 ppi) Appendix F certified slap fingerprint readers and a contactless reader. We further show that the features present in the 2D calibration patterns (e.g. ridge structure) are replicated with high fidelity on both the electronically generated and physically fabricated 3D hand targets. Results of evaluation experiments for the three slap readers and the contactless reader using the generated whole hand targets are also presented.",
"title": ""
},
{
"docid": "4f3177b303b559f341b7917683114257",
"text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.",
"title": ""
},
{
"docid": "67cd0b0caa271c60737f82cf2dc42c1c",
"text": "We unify recent neural approaches to one-shot learning with older ideas of associative memory in a model for metalearning. Our model learns jointly to represent data and to bind class labels to representations in a single shot. It builds representations via slow weights, learned across tasks through SGD, while fast weights constructed by a Hebbian learning rule implement one-shot binding for each new task. On the Omniglot, Mini-ImageNet, and Penn Treebank one-shot learning benchmarks, our model achieves state-of-the-art results.",
"title": ""
},
{
"docid": "a7765d68c277dbc712376a46a377d5d4",
"text": "The trend of currency rates can be predicted with supporting from supervised machine learning in the transaction systems such as support vector machine. Not only representing models in use of machine learning techniques in learning, the support vector machine (SVM) model also is implemented with actual FoRex transactions. This might help automatically to make the transaction decisions of Bid/Ask in Foreign Exchange Market by using Expert Advisor (Robotics). The experimental results show the advantages of use SVM compared to the transactions without use SVM ones.",
"title": ""
},
{
"docid": "734840224154ef88cdb196671fd3f3f8",
"text": "Tiny face detection aims to find faces with high degrees of variability in scale, resolution and occlusion in cluttered scenes. Due to the very little information available on tiny faces, it is not sufficient to detect them merely based on the information presented inside the tiny bounding boxes or their context. In this paper, we propose to exploit the semantic similarity among all predicted targets in each image to boost current face detectors. To this end, we present a novel framework to model semantic similarity as pairwise constraints within the metric learning scheme, and then refine our predictions with the semantic similarity by utilizing the graph cut techniques. Experiments conducted on three widely-used benchmark datasets have demonstrated the improvement over the-state-of-the-arts gained by applying this idea.",
"title": ""
},
{
"docid": "40c90bf58aae856c7c72bac573069173",
"text": "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a “distilled” policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.",
"title": ""
},
{
"docid": "2ecdf4a4d7d21ca30f3204506a91c22c",
"text": "Because of the transition from analog to digital technologies, content owners are seeking technologies for the protection of copyrighted multimedia content. Encryption and watermarking are two major tools that can be used to prevent unauthorized consumption and duplication. In this paper, we generalize an idea in a recent paper that embeds a binary pattern in the form of a binary image in the LL and HH bands at the second level of Discrete Wavelet Transform (DWT) decomposition. Our generalization includes all four bands (LL, HL, LH, and HH), and a comparison of embedding a watermark at first and second level decompositions. We tested the proposed algorithm against fifteen attacks. Embedding the watermark in lower frequencies is robust to a group of attacks, and embedding the watermark in higher frequencies is robust to another set of attacks. Only for rewatermarking and collusion attacks, the watermarks extracted from all four bands are identical. Our experiments indicate that first level decomposition appear advantageous for two reasons: The area for watermark embedding is maximized, and the extracted watermarks are more textured with better visual quality.",
"title": ""
},
{
"docid": "e6e86f903da872b89b1043c4df9a41d6",
"text": "With the emergence of Web 2.0 technology and the expansion of on-line social networks, current Internet users have the ability to add their reviews, ratings and opinions on social media and on commercial and news web sites. Sentiment analysis aims to classify these reviews reviews in an automatic way. In the literature, there are numerous approaches proposed for automatic sentiment analysis for different language contexts. Each language has its own properties that makes the sentiment analysis more challenging. In this regard, this work presents a comprehensive survey of existing Arabic sentiment analysis studies, and covers the various approaches and techniques proposed in the literature. Moreover, we highlight the main difficulties and challenges of Arabic sentiment analysis, and the proposed techniques in literature to overcome these barriers.",
"title": ""
},
{
"docid": "d518f1b11f2d0fd29dcef991afe17d17",
"text": "Applications must be able to synchronize accesses to operating system resources in order to ensure correctness in the face of concurrency and system failures. System transactions allow the programmer to specify updates to heterogeneous system resources with the OS guaranteeing atomicity, consistency, isolation, and durability (ACID). System transactions efficiently and cleanly solve persistent concurrency problems that are difficult to address with other techniques. For example, system transactions eliminate security vulnerabilities in the file system that are caused by time-of-check-to-time-of-use (TOCTTOU) race conditions. System transactions enable an unsuccessful software installation to roll back without disturbing concurrent, independent updates to the file system.\n This paper describes TxOS, a variant of Linux 2.6.22 that implements system transactions. TxOS uses new implementation techniques to provide fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity. The prototype demonstrates that a mature OS running on commodity hardware can provide system transactions at a reasonable performance cost. For instance, a transactional installation of OpenSSH incurs only 10% overhead, and a non-transactional compilation of Linux incurs negligible overhead on TxOS. By making transactions a central OS abstraction, TxOS enables new transactional services. For example, one developer prototyped a transactional ext3 file system in less than one month.",
"title": ""
},
{
"docid": "7279065640e6f2b7aab7a6e91118e0d5",
"text": "Erythrocyte injury such as osmotic shock, oxidative stress or energy depletion stimulates the formation of prostaglandin E2 through activation of cyclooxygenase which in turn activates a Ca2+ permeable cation channel. Increasing cytosolic Ca2+ concentrations activate Ca2+ sensitive K+ channels leading to hyperpolarization, subsequent loss of KCl and (further) cell shrinkage. Ca2+ further stimulates a scramblase shifting phosphatidylserine from the inner to the outer cell membrane. The scramblase is sensitized for the effects of Ca2+ by ceramide which is formed by a sphingomyelinase following several stressors including osmotic shock. The sphingomyelinase is activated by platelet activating factor PAF which is released by activation of phospholipase A2. Phosphatidylserine at the erythrocyte surface is recognised by macrophages which engulf and degrade the affected cells. Moreover, phosphatidylserine exposing erythrocytes may adhere to the vascular wall and thus interfere with microcirculation. Erythrocyte shrinkage and phosphatidylserine exposure ('eryptosis') mimic features of apoptosis in nucleated cells which however, involves several mechanisms lacking in erythrocytes. In kidney medulla, exposure time is usually too short to induce eryptosis despite high osmolarity. Beyond that high Cl- concentrations inhibit the cation channel and high urea concentrations the sphingomyelinase. Eryptosis is inhibited by erythropoietin which thus extends the life span of circulating erythrocytes. Several conditions trigger premature eryptosis thus favouring the development of anemia. On the other hand, eryptosis may be a mechanism of defective erythrocytes to escape hemolysis. Beyond their significance for erythrocyte survival and death the mechanisms involved in 'eryptosis' may similarly contribute to apoptosis of nucleated cells.",
"title": ""
}
] |
scidocsrr
|
8610e595825f8f7b9afa57141cbf2f16
|
Correlation of Digital Evidences in Forensic Investigation through Semantic Technologies
|
[
{
"docid": "4e287f84cb11a17ec2c8c4c73ef235dd",
"text": "We have developed a program called |fiwalk| which produces detailedXML describing all of the partitions and files on a hard drive or diskimage, as well as any extractable metadata from the document filesthemselves. We show how it is relatively simple to create automateddisk forensic applications using a Python module we have written thatreads |fiwalk|'s XML files. Finally, we present threeapplications using this system: a program to generate maps ofdisk images; an image redaction program; and a data transfer kioskwhich uses forensic tools to allow the migration of data from portablestorage devices without risk of infection from hostile software thatthe portable device may contain.",
"title": ""
},
{
"docid": "62be3597e792abecc4afa44903edc9aa",
"text": "Digital forensic tools are being developed at a brisk pace in response to the ever increasing variety of forensic targets. Most tools are created for specific tasks – filesystem analysis, memory analysis, network analysis, etc. – and make little effort to interoperate with one another. This makes it difficult and extremely time-consuming for an investigator to build a wider view of the state of the system under investigation. In this work, we present FACE, a framework for automatic evidence discovery and correlation from a variety of forensic targets. Our prototype implementation demonstrates the integrated analysis and correlation of a disk image, memory image, network capture, and configuration log files. The results of this analysis are presented as a coherent view of the state of a target system, allowing investigators to quickly understand it. We also present an advanced open-source memory analysis tool, ramparser, for the automated analysis of Linux systems. a 2008 Digital Forensic Research Workshop. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "099f8791628965844b96602aebed90f8",
"text": "Termites have colonized many habitats and are among the most abundant animals in tropical ecosystems, which they modify considerably through their actions. The timing of their rise in abundance and of the dispersal events that gave rise to modern termite lineages is not well understood. To shed light on termite origins and diversification, we sequenced the mitochondrial genome of 48 termite species and combined them with 18 previously sequenced termite mitochondrial genomes for phylogenetic and molecular clock analyses using multiple fossil calibrations. The 66 genomes represent most major clades of termites. Unlike previous phylogenetic studies based on fewer molecular data, our phylogenetic tree is fully resolved for the lower termites. The phylogenetic positions of Macrotermitinae and Apicotermitinae are also resolved as the basal groups in the higher termites, but in the crown termitid groups, including Termitinae + Syntermitinae + Nasutitermitinae + Cubitermitinae, the position of some nodes remains uncertain. Our molecular clock tree indicates that the lineages leading to termites and Cryptocercus roaches diverged 170 Ma (153-196 Ma 95% confidence interval [CI]), that modern Termitidae arose 54 Ma (46-66 Ma 95% CI), and that the crown termitid group arose 40 Ma (35-49 Ma 95% CI). This indicates that the distribution of basal termite clades was influenced by the final stages of the breakup of Pangaea. Our inference of ancestral geographic ranges shows that the Termitidae, which includes more than 75% of extant termite species, most likely originated in Africa or Asia, and acquired their pantropical distribution after a series of dispersal and subsequent diversification events.",
"title": ""
},
{
"docid": "733e3b25a53a7dc537df94a4cb5e473f",
"text": "Brain activity associated with attention sustained on the task of safe driving has received considerable attention recently in many neurophysiological studies. Those investigations have also accurately estimated shifts in drivers' levels of arousal, fatigue, and vigilance, as evidenced by variations in their task performance, by evaluating electroencephalographic (EEG) changes. However, monitoring the neurophysiological activities of automobile drivers poses a major measurement challenge when using a laboratory-oriented biosensor technology. This work presents a novel dry EEG sensor based mobile wireless EEG system (referred to herein as Mindo) to monitor in real time a driver's vigilance status in order to link the fluctuation of driving performance with changes in brain activities. The proposed Mindo system incorporates the use of a wireless and wearable EEG device to record EEG signals from hairy regions of the driver conveniently. Additionally, the proposed system can process EEG recordings and translate them into the vigilance level. The study compares the system performance between different regression models. Moreover, the proposed system is implemented using JAVA programming language as a mobile application for online analysis. A case study involving 15 study participants assigned a 90 min sustained-attention driving task in an immersive virtual driving environment demonstrates the reliability of the proposed system. Consistent with previous studies, power spectral analysis results confirm that the EEG activities correlate well with the variations in vigilance. Furthermore, the proposed system demonstrated the feasibility of predicting the driver's vigilance in real time.",
"title": ""
},
{
"docid": "400f6485f06cf2e66afb9a9d5bd19f4d",
"text": "The performance of in-memory based data analytic frameworks such as Spark is significantly affected by how data is partitioned. This is because the partitioning effectively determines task granularity and parallelism. Moreover, different phases of a workload execution can have different optimal partitions. However, in the current implementations, the tuning knobs controlling the partitioning are either configured statically or involve a cumbersome programmatic process for affecting changes at runtime. In this paper, we propose CHOPPER, a system for automatically determining the optimal number of partitions for each phase of a workload and dynamically changing the partition scheme during workload execution. CHOPPER monitors the task execution and DAG scheduling information to determine the optimal level of parallelism. CHOPPER repartitions data as needed to ensure efficient task granularity, avoids data skew, and reduces shuffle traffic. Thus, CHOPPER allows users to write applications without having to hand-tune for optimal parallelism. Experimental results show that CHOPPER effectively improves workload performance by up to 35.2% compared to standard Spark setup.",
"title": ""
},
{
"docid": "77c7f144c63df9022434313cfe2e5290",
"text": "Today the prevalence of online banking is enormous. People prefer to accomplish their financial transactions through the online banking services offered by their banks. This method of accessing is more convenient, quicker and secured. Banks are also encouraging their customers to opt for this mode of e-banking facilities since that result in cost savings for the banks and there is better customer satisfaction. An important aspect of online banking is the precise authentication of users before allowing them to access their accounts. Typically this is done by asking the customers to enter their unique login id and password combination. The success of this authentication relies on the ability of customers to maintain the secrecy of their passwords. Since the customer login to the banking portals normally occur in public environments, the passwords are prone to key logging attacks. To avoid this, virtual keyboards are provided. But virtual keyboards are vulnerable to shoulder surfing based attacks. In this paper, a secured virtual keyboard scheme that withstands such attacks is proposed. Elaborate user studies carried out on the proposed scheme have testified the security and the usability of the proposed approach.",
"title": ""
},
{
"docid": "30c96eb397b515f6b3e4d05c071413d1",
"text": "Thin-film solar cells have the potential to significantly decrease the cost of photovoltaics. Light trapping is particularly critical in such thin-film crystalline silicon solar cells in order to increase light absorption and hence cell efficiency. In this article we investigate the suitability of localized surface plasmons on silver nanoparticles for enhancing the absorbance of silicon solar cells. We find that surface plasmons can increase the spectral response of thin-film cells over almost the entire solar spectrum. At wavelengths close to the band gap of Si we observe a significant enhancement of the absorption for both thin-film and wafer-based structures. We report a sevenfold enhancement for wafer-based cells at =1200 nm and up to 16-fold enhancement at =1050 nm for 1.25 m thin silicon-on-insulator SOI cells, and compare the results with a theoretical dipole-waveguide model. We also report a close to 12-fold enhancement in the electroluminescence from ultrathin SOI light-emitting diodes and investigate the effect of varying the particle size on that enhancement. © 2007 American Institute of Physics. DOI: 10.1063/1.2734885",
"title": ""
},
{
"docid": "c8ca57db545f2d1f70f3640651bb3e79",
"text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem'? The answer is, any way that works.\" (Richard P. Feyman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.",
"title": ""
},
{
"docid": "5da9811fb60b5f6334e05ba71902ddfd",
"text": "In this paper, a numerical TRL calibration technique is used to accurately extract the equivalent circuit parameters of post-wall iris and input/output coupling structure which are used for the design of directly-coupled substrate integrated waveguide (SIW) filter with the first/last SIW cavities directly excited by 50 Ω microstrip line. On the basis of this dimensional design process, the entire procedure of filter design can meet all of the design specifications without resort to any time-consuming tuning and optimization. A K-band 5th-degree SIW filter with relative bandwidth of 6% was designed and fabricated by low-cost PCB process on Rogers RT/duroid 5880. Measured results which agree well with simulated results validate the accurate dimensional synthesis procedure.",
"title": ""
},
{
"docid": "414766f92683470af0b01edcd2ed6e62",
"text": "The world in which we work is changing. Information and communication technologies transform the work environment, providing the flexibility of when and where to work. The New Way of Working (NWOW) is a relatively new phenomenon that provides the context for these developments. It consists of three distinct pillars that are referred to as Bricks, Bytes and Behaviour. These pillars formed the basis for the development of the NWOW Analysis Monitor that enables organisations to determine their current level of NWOW adoption and provides guidance for future initiatives in adopting NWOW practices. The level of adoption is determined from both the manager’s and employees’ perspective as they might have a different perception and/or expectations regarding NWOW. The development of the multi-level NWOW Analysis Monitor is based on the Design Science Research approach. The monitor has been evaluated in two cases, forming two iterations in the design science research cycle. It has proved to be a useful assessment tool for organisations in the process of implementing NWOW. In future research the NWOW Analysis Monitor will be used in quantitative research on the effects of the implementation of NWOW on the organisation and its performance.",
"title": ""
},
{
"docid": "412b616f4fcb9399c8220c542ecac83e",
"text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.",
"title": ""
},
{
"docid": "36c9f77ccda6a8563f8588575e3398cc",
"text": "Physical database design is important for query performance in a shared-nothing parallel database system, in which data is horizontally partitioned among multiple independent nodes. We seek to automate the process of data partitioning. Given a workload of SQL statements, we seek to determine automatically how to partition the base data across multiple nodes to achieve overall optimal (or close to optimal) performance for that workload. Previous attempts use heuristic rules to make those decisions. These approaches fail to consider all of the interdependent aspects of query performance typically modeled by today's sophisticated query optimizers.We present a comprehensive solution to the problem that has been tightly integrated with the optimizer of a commercial shared-nothing parallel database system. Our approach uses the query optimizer itself both to recommend candidate partitions for each table that will benefit each query in the workload, and to evaluate various combinations of these candidates. We compare a rank-based enumeration method with a random-based one. Our experimental results show that the former is more effective.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "66720892b48188c10d05937367dbd25e",
"text": "In wireless sensor network (WSN) [1], energy efficiency is one of the very important issues. Protocols in WSNs are broadly classified as Hierarchical, Flat and Location Based routing protocols. Hierarchical routing is used to perform efficient routing in WSN. Here we concentrate on Hierarchical Routing protocols, different types of Hierarchical routing protocols, and PEGASIS (Power-Efficient Gathering in Sensor Information Systems) [2, 3] based routing",
"title": ""
},
{
"docid": "ff9b5d96b762b2baacf4bf19348c614b",
"text": "Drought stress is a major factor in reduce growth, development and production of plants. Stress was applied with polyethylene glycol (PEG) 6000 and water potentials were: zero (control), -0.15 (PEG 10%), -0.49 (PEG 20%), -1.03 (PEG 30%) and -1.76 (PEG40%) MPa. The solutes accumulation of two maize (Zea mays L.) cultivars -704 and 301were determined after drought stress. In our experiments, a higher amount of soluble sugars and a lower amount of starch were found under stress. Soluble sugars concentration increased (from 1.18 to 1.90 times) in roots and shoots of both varieties when the studied varieties were subjected to drought stress, but starch content were significantly (p<0.05) decreased (from 16 to 84%) in both varieties. This suggests that sugars play an important role in Osmotic Adjustment (OA) in maize. The free proline level also increased (from 1.56 to 3.13 times) in response to drought stress and the increase in 704 var. was higher than 301 var. It seems to proline may play a role in minimizing the damage caused by dehydration. Increase of proline content in shoots was higher than roots, but increase of soluble sugar content and decrease of starch content in roots was higher than shoots.",
"title": ""
},
{
"docid": "b75a98afb4408f1b79e94cc59ce17cc9",
"text": "The concepts of sustainability and reusability have great importance in engineering education. In this context, metadata provides reusability and the effective use of Learning Objects (LOs). In addition, searching the huge LO Repository with metadata requires too much time. If the selection criteria do not exactly match the metadata values, it is not possible to find the most appropriate LO. When this situation arises, the multi-criteria decision making (MCDM) method can meet the requirements. In this study, the SDUNESA software was developed and this software allows for the selection of a suitable LO from the repository by using an analytical hierarchy process MCDM method. This web-based SDUNESA software is also used to store, share and select a suitable LO in the repository. To meet these features, the SDUNESA software contains Web 2.0 technologies such as AJAX, XML and SOA Web Services. The SDUNESA software was especially developed for computer engineering education. Instructors can use this software to select LOs with defined criteria. The parameters of the web-based SDUNESA learning object selection software that use the AHP method are defined under the computer education priorities. The obtained results show that the AHP method selects the most reliable learning object that meets the criteria.",
"title": ""
},
{
"docid": "570e03101ae116e2f20ab6337061ec3f",
"text": "This study explored the potential for using seed cake from hemp (Cannabis sativa L.) as a protein feed for dairy cows. The aim was to evaluate the effects of increasing the proportion of hempseed cake (HC) in the diet on milk production and milk composition. Forty Swedish Red dairy cows were involved in a 5-week dose-response feeding trial. The cows were allocated randomly to one of four experimental diets containing on average 494 g/kg of grass silage and 506 g/kg of concentrate on a dry matter (DM) basis. Diets containing 0 g (HC0), 143 g (HC14), 233 g (HC23) or 318 g (HC32) HC/kg DM were achieved by replacing an increasing proportion of compound pellets with cold-pressed HC. Increasing the proportion of HC resulted in dietary crude protein (CP) concentrations ranging from 126 for HC0 to 195 g CP/kg DM for HC32. Further effects on the composition of the diet with increasing proportions of HC were higher fat and NDF and lower starch concentrations. There were no linear or quadratic effects on DM intake, but increasing the proportion of HC in the diet resulted in linear increases in fat and NDF intake, as well as CP intake (P < 0.001), and a linear decrease in starch intake (P < 0.001). The proportion of HC had significant quadratic effects on the yields of milk, energy-corrected milk (ECM) and milk protein, fat and lactose. The curvilinear response of all yield parameters indicated maximum production from cows fed diet HC14. Increasing the proportion of HC resulted in linear decreases in both milk protein and milk fat concentration (P = 0.005 and P = 0.017, respectively), a linear increase in milk urea (P < 0.001), and a linear decrease in CP efficiency (milk protein/CP intake; P < 0.001). In conclusion, the HC14 diet, corresponding to a dietary CP concentration of 157 g/kg DM, resulted in the maximum yields of milk and ECM by dairy cows in this study.",
"title": ""
},
{
"docid": "1841f11b5c2b2e4a59a47ea6707dc1c6",
"text": "We develop a causal inference approach to recommender systems. Observational recommendation data contains two sources of information: which items each user decided to look at and which of those items each user liked. We assume these two types of information come from differentmodels—the exposure data comes from a model by which users discover items to consider; the click data comes from a model by which users decide which items they like. Traditionally, recommender systems use the click data alone (or ratings data) to infer the user preferences. But this inference is biased by the exposure data, i.e., that users do not consider each item independently at random. We use causal inference to correct for this bias. On real-world data, we demonstrate that causal inference for recommender systems leads to improved generalization to new data.",
"title": ""
},
{
"docid": "9f46ec6dad4a1ebeeabb38f77ad4b1d7",
"text": "This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of “many” normal cubic patches. This deep network operates on small cubic patches as being the first stage, before carefully resizing the remaining candidates of interest, and evaluating those at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages, which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect “simple” normal patches, such as background patches and more complex normal patches, are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparable to current top-performing detection and localization methods on standard benchmarks, but outperforms those in general with respect to required computation time.",
"title": ""
},
{
"docid": "a4e122d0b827d25bea48d41487437d74",
"text": "We introduce UniAuth, a set of mechanisms for streamlining authentication to devices and web services. With UniAuth, a user first authenticates himself to his UniAuth client, typically his smartphone or wearable device. His client can then authenticate to other services on his behalf. In this paper, we focus on exploring the user experiences with an early iPhone prototype called Knock x Knock. To manage a variety of accounts securely in a usable way, Knock x Knock incorporates features not supported in existing password managers, such as tiered and location-aware lock control, authentication to laptops via knocking, and storing credentials locally while working with laptops seamlessly. In two field studies, 19 participants used Knock x Knock for one to three weeks with their own devices and accounts. Our participants were highly positive about Knock x Knock, demonstrating the desirability of our approach. We also discuss interesting edge cases and design implications.",
"title": ""
},
{
"docid": "62dd6d0d18b6bce52a7546d32b8a8c12",
"text": "In this paper we replicate and advance Macy and Flache’s (2002; Proc. Natl. Acad. Sci. USA, 99, 7229–7236) work on the dynamics of reinforcement learning in 2×2 (2-player 2-strategy) social dilemmas. In particular, we formalise the solution concepts they describe, provide analytical results on the dynamics of their model for any 2×2 game, and discuss the robustness of their results to occasional mistakes made by players in choosing their actions (i.e. trembling hands). It is shown here that the dynamics of their model are strongly dependent on the speed at which players learn. With high learning rates the system quickly reaches its asymptotic behaviour; on the other hand, when learning rates are low, two distinctively different dynamic regimes can be clearly observed before the system settles down forever. Similarly, it is shown that the inclusion of small quantities of randomness in the players’ learning algorithm can change the dynamics of the model dramatically.",
"title": ""
}
] |
scidocsrr
|
e40826b05fcfa1dcefdd4b62c5fe6e8f
|
Security and privacy challenges in industrial Internet of Things
|
[
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
},
{
"docid": "223a7496c24dcf121408ac3bba3ad4e5",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "2e8333674a0b9c782aa3796b6475bdf7",
"text": "As embedded systems are more than ever present in our society, their security is becoming an increasingly important issue. However, based on the results of many recent analyses of individual firmware images, embedded systems acquired a reputation of being insecure. Despite these facts, we still lack a global understanding of embedded systems’ security as well as the tools and techniques needed to support such general claims. In this paper we present the first public, large-scale analysis of firmware images. In particular, we unpacked 32 thousand firmware images into 1.7 million individual files, which we then statically analyzed. We leverage this large-scale analysis to bring new insights on the security of embedded devices and to underline and detail several important challenges that need to be addressed in future research. We also show the main benefits of looking at many different devices at the same time and of linking our results with other large-scale datasets such as the ZMap’s HTTPS survey. In summary, without performing sophisticated static analysis, we discovered a total of 38 previously unknown vulnerabilities in over 693 firmware images. Moreover, by correlating similar files inside apparently unrelated firmware images, we were able to extend some of those vulnerabilities to over 123 different products. We also confirmed that some of these vulnerabilities altogether are affecting at least 140K devices accessible over the Internet. It would not have been possible to achieve these results without an analysis at such wide scale. We believe that this project, which we plan to provide as a firmware unpacking and analysis web service, will help shed some light on the security of embedded devices. http://firmware.re",
"title": ""
}
] |
[
{
"docid": "71164831cb7376d92461f1cfd95c9244",
"text": "Blood coagulation and complement pathways are two important natural defense systems. The high affinity interaction between the anticoagulant vitamin K-dependent protein S and the complement regulator C4b-binding protein (C4BP) is a direct physical link between the two systems. In human plasma, ~70% of total protein S circulates in complex with C4BP; the remaining is free. The anticoagulant activity of protein S is mainly expressed by the free form, although the protein S-C4BP complex has recently been shown to have some anticoagulant activity. The high affinity binding of protein S to C4BP provides C4BP with the ability to bind to negatively charged phospholipid membranes, which serves the purpose of localizing complement regulatory activity close to the membrane. Even though C4BP does not directly affect the coagulation system, it still influences the regulation of blood coagulation through its interaction with protein S. This is particularly important in states of inherited deficiency of protein S where the tight binding of protein S to C4BP results in a pronounced and selective drop in concentration of free protein S, whereas the concentration of protein S in complex with C4BP remains relatively unchanged. This review summarizes the current knowledge on C4BP with respect to its association with thrombosis and hemostasis.",
"title": ""
},
{
"docid": "9ec7b122117acf691f3bee6105deeb81",
"text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"title": ""
},
{
"docid": "82f029ebcca0216bccfdb21ab13ac593",
"text": "Presently, middleware technologies abound for the Internet-of-Things (IoT), directed at hiding the complexity of underlying technologies and easing the use and management of IoT resources. The middleware solutions of today are capable technologies, which provide much advanced services and that are built using superior architectural models, they however fail short in some important aspects: existing middleware do not properly activate the link between diverse applications with much different monitoring purposes and many disparate sensing networks that are of heterogeneous nature and geographically dispersed. Then, current middleware are unfit to provide some system-wide global arrangement (intelligence, routing, data delivery) emerging from the behaviors of the constituent nodes, rather than from the coordination of single elements, i.e. self-organization. This paper presents the SIMPLE self-organized and intelligent middleware platform. SIMPLE middleware innovates from current state-of-research exactly by exhibiting self-organization properties, a focus on data-dissemination using multi-level subscriptions processing and a tiered networking approach able to cope with many disparate, widespread and heterogeneous sensing networks (e.g. WSN). In this way, the SIMLE middleware is provided as robust zero-configuration technology, with no central dependable system, immune to failures, and able to efficiently deliver the right data at the right time, to needing applications.",
"title": ""
},
{
"docid": "3ce03df4e5faa4132b2e791833549525",
"text": "Cardiac left ventricle (LV) quantification is among the most clinically important tasks for identification and diagnosis of cardiac diseases, yet still a challenge due to the high variability of cardiac structure and the complexity of temporal dynamics. Full quantification, i.e., to simultaneously quantify all LV indices including two areas (cavity and myocardium), six regional wall thicknesses (RWT), three LV dimensions, and one cardiac phase, is even more challenging since the uncertain relatedness intra and inter each type of indices may hinder the learning procedure from better convergence and generalization. In this paper, we propose a newly-designed multitask learning network (FullLVNet), which is constituted by a deep convolution neural network (CNN) for expressive feature embedding of cardiac structure; two followed parallel recurrent neural network (RNN) modules for temporal dynamic modeling; and four linear models for the final estimation. During the final estimation, both intraand inter-task relatedness are modeled to enforce improvement of generalization: (1) respecting intra-task relatedness, group lasso is applied to each of the regression tasks for sparse and common feature selection and consistent prediction; (2) respecting inter-task relatedness, three phase-guided constraints are proposed to penalize violation of the temporal behavior of the obtained LV indices. Experiments on MR sequences of 145 subjects show that FullLVNet achieves high accurate prediction with our intraand inter-task relatedness, leading to MAE of 190 mm, 1.41 mm, 2.68 mm for average areas, RWT, dimensions and error rate of 10.4% for the phase classification. This endows our method a great potential in comprehensive clinical assessment of global, regional and dynamic cardiac function.",
"title": ""
},
{
"docid": "9042faed1193b7bc4c31f2bc239c5d89",
"text": "Hand gesture recognition for human computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system, which can identify specific human gestures and use them to convey information or for device control. This paper presents a comparative study of four classification algorithms for static hand gesture classification using two different hand features data sets. The approach used consists in identifying hand pixels in each frame, extract features and use those features to recognize a specific hand pose. The results obtained proved that the ANN had a very good performance and that the feature selection and data preparation is an important phase in the all process, when using low-resolution images like the ones obtained with the camera in the current work.",
"title": ""
},
{
"docid": "032589c39e258890e29196ca013a3e22",
"text": "We describe Charm++, an object oriented portable parallel programming language based on Cff. Its design philosophy, implementation, sample applications and their performance on various parallel machines are described. Charm++ is an explicitly parallel language consisting of Cft with a few extensions. It provides a clear separation between sequential and parallel objects. The execution model of Charm++ is message driven, thus helping one write programs that are latencytolerant. The language supports multiple inheritance, dynamic binding, overloading, strong typing, and reuse for parallel objects. Charm++ provides specific modes for sharing information between parallel objects. Extensive dynamic load balancing strategies are provided. It is based on the Charm parallel programming system, and its runtime system implementation reuses most of the runtime system for Charm.",
"title": ""
},
{
"docid": "d979fdf75f2e555fa591a2e49d985d0e",
"text": "Motion Coordination for VTOL Unmanned Aerial Vehicles develops new control design techniques for the distributed coordination of a team of autonomous unmanned aerial vehicles. In particular, it provides new control design approaches for the attitude synchronization of a formation of rigid body systems. In addition, by integrating new control design techniques with some concepts from nonlinear control theory and multi-agent systems, it presents a new theoretical framework for the formation control of a class of under-actuated aerial vehicles capable of vertical take-off and landing.",
"title": ""
},
{
"docid": "e143eb298fff97f8f58cc52caa945640",
"text": "Supervised domain adaptation—where a large generic corpus and a smaller indomain corpus are both available for training—is a challenge for neural machine translation (NMT). Standard practice is to train a generic model and use it to initialize a second model, then continue training the second model on in-domain data to produce an in-domain model. We add an auxiliary term to the training objective during continued training that minimizes the cross entropy between the indomain model’s output word distribution and that of the out-of-domain model to prevent the model’s output from differing too much from the original out-ofdomain model. We perform experiments on EMEA (descriptions of medicines) and TED (rehearsed presentations), initialized from a general domain (WMT) model. Our method shows improvements over standard continued training by up to 1.5 BLEU.",
"title": ""
},
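The auxiliary term described in the passage above amounts to adding a cross-entropy between the two models' output distributions to the usual training loss. The following is a hedged PyTorch sketch, not the authors' implementation; the tensor shapes, the `weight` hyperparameter, and the token-level averaging are assumptions:

```python
import torch
import torch.nn.functional as F

def continued_training_loss(in_logits, out_logits, targets, pad_id=0, weight=1.0):
    """NLL on in-domain targets plus an auxiliary cross-entropy between the
    in-domain model's output distribution and the frozen out-of-domain model's
    distribution, discouraging drift from the generic model."""
    nll = F.cross_entropy(in_logits, targets, ignore_index=pad_id)
    # Cross entropy H(p_out, p_in) = -sum_y p_out(y) * log p_in(y), averaged over tokens.
    log_p_in = F.log_softmax(in_logits, dim=-1)
    p_out = F.softmax(out_logits, dim=-1).detach()    # out-of-domain model is frozen
    aux = -(p_out * log_p_in).sum(dim=-1).mean()
    return nll + weight * aux

# Toy usage: a batch of 8 target tokens over a 100-word vocabulary.
in_logits = torch.randn(8, 100, requires_grad=True)   # in-domain model outputs
out_logits = torch.randn(8, 100)                      # out-of-domain model outputs
targets = torch.randint(0, 100, (8,))
loss = continued_training_loss(in_logits, out_logits, targets)
loss.backward()
```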
{
"docid": "321049dbe0d9bae5545de3d8d7048e01",
"text": "ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.",
"title": ""
},
{
"docid": "48b6f2cb0c9fd50619f08c433ea40068",
"text": "The medicinal value of cannabis (marijuana) is well documented in the medical literature. Cannabinoids, the active ingredients in cannabis, have many distinct pharmacological properties. These include analgesic, anti-emetic, anti-oxidative, neuroprotective and anti-inflammatory activity, as well as modulation of glial cells and tumor growth regulation. Concurrent with all these advances in the understanding of the physiological and pharmacological mechanisms of cannabis, there is a strong need for developing rational guidelines for dosing. This paper will review the known chemistry and pharmacology of cannabis and, on that basis, discuss rational guidelines for dosing.",
"title": ""
},
{
"docid": "3072c5458a075e6643a7679ccceb1417",
"text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.",
"title": ""
},
{
"docid": "aee5eb38d6cbcb67de709a30dd37c29a",
"text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. We propose that uncoating initiates after the first strand transfer of reverse transcription.",
"title": ""
},
{
"docid": "5596f6d7ebe828f4d6f5ab4d94131b1d",
"text": "A successful quality model is indispensable in a rich variety of multimedia applications, e.g., image classification and video summarization. Conventional approaches have developed many features to assess media quality at both low-level and high-level. However, they cannot reflect the process of human visual cortex in media perception. It is generally accepted that an ideal quality model should be biologically plausible, i.e., capable of mimicking human gaze shifting as well as the complicated visual cognition. In this paper, we propose a biologically inspired quality model, focusing on interpreting how humans perceive visually and semantically important regions in an image (or a video clip). Particularly, we first extract local descriptors (graphlets in this work) from an image/frame. They are projected onto the perceptual space, which is built upon a set of low-level and high-level visual features. Then, an active learning algorithm is utilized to select graphlets that are both visually and semantically salient. The algorithm is based on the observation that each graphlet can be linearly reconstructed by its surrounding ones, and spatially nearer ones make a greater contribution. In this way, both the local and global geometric properties of an image/frame can be encoded in the selection process. These selected graphlets are linked into a so-called biological viewing path (BVP) to simulate human visual perception. Finally, the quality of an image or a video clip is predicted by a probabilistic model. Experiments shown that 1) the predicted BVPs are over 90% consistent with real human gaze shifting paths on average; and 2) our quality model outperforms many of its competitors remarkably.",
"title": ""
},
{
"docid": "5dee244ee673909c3ba3d3d174a7bf83",
"text": "Fingerprint has remained a very vital index for human recognition. In the field of security, series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree with which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for the fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by Window Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. KeywordAFIS; Pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.",
"title": ""
},
{
"docid": "f68161697aed6d12598b0b9e34aeae68",
"text": "Automation in agriculture comes into play to increase productivity, quality and economic growth of the country. Fruit grading is an important process for producers which affects the fruits quality evaluation and export market. Although the grading and sorting can be done by the human, but it is slow, labor intensive, error prone and tedious. Hence, there is a need of an intelligent fruit grading system. In recent years, researchers had developed numerous algorithms for fruit sorting using computer vision. Color, textural and morphological features are the most commonly used to identify the diseases, maturity and class of the fruits. Subsequently, these features are used to train soft computing technique network. In this paper, use of image processing in agriculture has been reviewed so as to provide an insight to the use of vision based systems highlighting their advantages and disadvantages.",
"title": ""
},
{
"docid": "53272bf6e5a466a361987feaad09a9e2",
"text": "Biomechanical energy harvesting is a feasible solution for powering wearable sensors by directly driving electronics or acting as wearable self-powered sensors. A wearable insole that not only can harvest energy from foot pressure during walking but also can serve as a self-powered human motion recognition sensor is reported. The insole is designed as a sandwich structure consisting of two wavy silica gel film separated by a flexible piezoelectric foil stave, which has higher performance compared with conventional piezoelectric harvesters with cantilever structure. The energy harvesting insole is capable of driving some common electronics by scavenging energy from human walking. Moreover, it can be used to recognize human motion as the waveforms it generates change when people are in different locomotion modes. It is demonstrated that different types of human motion such as walking and running are clearly classified by the insole without any external power source. This work not only expands the applications of piezoelectric energy harvesters for wearable power supplies and self-powered sensors, but also provides possible approaches for wearable self-powered human motion monitoring that is of great importance in many fields such as rehabilitation and sports science.",
"title": ""
},
{
"docid": "19ea9b23f8757804c23c21293834ff3f",
"text": "We try to address the problem of document layout understanding using a simple algorithm which generalizes across multiple domains while training on just few examples per domain. We approach this problem via supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain specific (target) dataset. We show that this methodology works for multiple domains with training samples as less as 10 documents. We demonstrate the effect of each component of the methodology in the end result and show the superiority of this methodology over simple object detectors.",
"title": ""
},
{
"docid": "35f8b54ee1fbf153cb483fc4639102a5",
"text": "This research studies the risk prediction of hospital readmissions using metaheuristic and data mining approaches. This is a critical issue in the U.S. healthcare system because a large percentage of preventable hospital readmissions derive from a low quality of care during patients’ stays in the hospital as well as poor arrangement of the discharge process. To reduce the number of hospital readmissions, the Centers for Medicare and Medicaid Services has launched a readmission penalty program in which hospitals receive reduced reimbursement for high readmission rates for Medicare beneficiaries. In the current practice, patient readmission risk is widely assessed by evaluating a LACE score including length of stay (L), acuity level of admission (A), comorbidity condition (C), and use of emergency rooms (E). However, the LACE threshold classifying highand low-risk readmitted patients is set up by clinic practitioners based on specific circumstances and experiences. This research proposed various data mining approaches to identify the risk group of a particular patient, including neural network model, random forest (RF) algorithm, and the hybrid model of swarm intelligence heuristic and support vector machine (SVM). The proposed neural network algorithm, the RF and the SVM classifiers are used to model patients’ characteristics, such as their ages, insurance payers, medication risks, etc. Experiments are conducted to compare the performance of the proposed models with previous research. Experimental results indicate that the proposed prediction SVM model with particle swarm parameter tuning outperforms other algorithms and achieves 78.4% on overall prediction accuracy, 97.3% on sensitivity. The high sensitivity shows its strength in correctly identifying readmitted patients. The outcome of this research will help reduce overall hospital readmission rates and allow hospitals to utilize their resources more efficiently to enhance interventions for high-risk patients. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
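As a hedged illustration of the modeling step in the passage above, the sketch below fits an RBF-kernel SVM on synthetic stand-in data and substitutes a plain grid search for the paper's particle-swarm parameter tuning; the feature set, class balance, and scoring choice are assumptions rather than the study's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for patient features (age, payer, medication risk, ...).
X, y = make_classification(n_samples=600, n_features=12, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Grid search stands in for the paper's particle-swarm parameter tuning.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
grid = GridSearchCV(pipe,
                    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
                    scoring="recall", cv=5)   # recall ~ sensitivity to readmitted patients
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```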
{
"docid": "97fa48d92c4a1b9d2bab250d5383173c",
"text": "This paper presents a new type of axial flux motor, the yokeless and segmented armature (YASA) topology. The YASA motor has no stator yoke, a high fill factor and short end windings which all increase torque density and efficiency of the machine. Thus, the topology is highly suited for high performance applications. The LIFEcar project is aimed at producing the world's first hydrogen sports car, and the first YASA motors have been developed specifically for the vehicle. The stator segments have been made using powdered iron material which enables the machine to be run up to 300 Hz. The iron in the stator of the YASA motor is dramatically reduced when compared to other axial flux motors, typically by 50%, causing an overall increase in torque density of around 20%. A detailed Finite Element analysis (FEA) analysis of the YASA machine is presented and it is shown that the motor has a peak efficiency of over 95%.",
"title": ""
},
{
"docid": "3b1d73691176ada154bab7716c6e776c",
"text": "Purpose – The purpose of this paper is to investigate the factors that affect the adoption of cloud computing by firms belonging to the high-tech industry. The eight factors examined in this study are relative advantage, complexity, compatibility, top management support, firm size, technology readiness, competitive pressure, and trading partner pressure. Design/methodology/approach – A questionnaire-based survey was used to collect data from 111 firms belonging to the high-tech industry in Taiwan. Relevant hypotheses were derived and tested by logistic regression analysis. Findings – The findings revealed that relative advantage, top management support, firm size, competitive pressure, and trading partner pressure characteristics have a significant effect on the adoption of cloud computing. Research limitations/implications – The research was conducted in the high-tech industry, which may limit the generalisability of the findings. Practical implications – The findings offer cloud computing service providers with a better understanding of what affects cloud computing adoption characteristics, with relevant insight on current promotions. Originality/value – The research contributes to the application of new technology cloud computing adoption in the high-tech industry through the use of a wide range of variables. The findings also help firms consider their information technologies investments when implementing cloud computing.",
"title": ""
}
] |
scidocsrr
|
7b1c47592df06d9a830d3b8ac204710f
|
Batteries and battery management systems for electric vehicles
|
[
{
"docid": "c85c3ef7100714d6d08f726aa8768bb9",
"text": "An adaptive Kalman filter algorithm is adopted to estimate the state of charge (SOC) of a lithium-ion battery for application in electric vehicles (EVs). Generally, the Kalman filter algorithm is selected to dynamically estimate the SOC. However, it easily causes divergence due to the uncertainty of the battery model and system noise. To obtain a better convergent and robust result, an adaptive Kalman filter algorithm that can greatly improve the dependence of the traditional filter algorithm on the battery model is employed. In this paper, the typical characteristics of the lithium-ion battery are analyzed by experiment, such as hysteresis, polarization, Coulomb efficiency, etc. In addition, an improved Thevenin battery model is achieved by adding an extra RC branch to the Thevenin model, and model parameters are identified by using the extended Kalman filter (EKF) algorithm. Further, an adaptive EKF (AEKF) algorithm is adopted to the SOC estimation of the lithium-ion battery. Finally, the proposed method is evaluated by experiments with federal urban driving schedules. The proposed SOC estimation using AEKF is more accurate and reliable than that using EKF. The comparison shows that the maximum SOC estimation error decreases from 14.96% to 2.54% and that the mean SOC estimation error reduces from 3.19% to 1.06%.",
"title": ""
}
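A minimal sketch of an EKF-based SOC estimator in the spirit of the passage above, assuming a simplified single-RC Thevenin model with a linear OCV curve and made-up cell parameters; the paper's second RC branch and the adaptive noise-covariance update are omitted:

```python
import numpy as np

# Simplified single-RC Thevenin battery model with a linear OCV curve.
dt, Q_ah = 1.0, 2.3                 # time step [s], cell capacity [Ah] (assumed)
R0, R1, C1 = 0.01, 0.015, 2400.0    # ohmic and polarization parameters (assumed)
a_ocv, b_ocv = 3.2, 1.0             # OCV(soc) ~ a_ocv + b_ocv * soc (assumed)

tau = np.exp(-dt / (R1 * C1))
A = np.array([[1.0, 0.0], [0.0, tau]])
B = np.array([-dt / (3600.0 * Q_ah), R1 * (1.0 - tau)])
H = np.array([[b_ocv, -1.0]])       # d(terminal voltage)/d[soc, v_rc]

x = np.array([0.9, 0.0])            # initial guess: SOC 90%, no polarization
P = np.diag([0.01, 0.001])
Qn, Rn = np.diag([1e-7, 1e-6]), 1e-3

def ekf_step(x, P, current, v_measured):
    # Predict: coulomb counting for SOC, first-order decay for the RC voltage.
    x = A @ x + B * current
    P = A @ P @ A.T + Qn
    # Update with the terminal-voltage measurement.
    v_pred = a_ocv + b_ocv * x[0] - x[1] - R0 * current
    S = H @ P @ H.T + Rn
    K = (P @ H.T) / S
    x = x + (K * (v_measured - v_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: constant 1 A discharge with a synthetic voltage reading.
for _ in range(10):
    x, P = ekf_step(x, P, current=1.0, v_measured=4.0)
print("estimated SOC:", x[0])
```

An adaptive variant would additionally re-estimate Qn and Rn online from the innovation sequence, which is the step that gives the AEKF its robustness to model error.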
] |
[
{
"docid": "d59d49083f896c01e8b8649f3a35b4c1",
"text": "This paper presents a wideband FMCW MIMO radar sensor capable of working in the frequency range between 120 GHz and 140 GHz. The sensor is based on a radar chipset fabricated in SiGe technology and uses a MIMO approach to improve the angular resolution. The MIMO operation is implemented by time domain multiplexing of the transmitters. The radar is capable of producing 2D images by using FFT processing and a delay-and-sum beamformer. This paper presents the overall radar system design together with the image reconstruction algorithms as well as first imaging results.",
"title": ""
},
{
"docid": "3c08e42ad9e6a2f2e7a29a187d8a791e",
"text": "An integrated single-inductor dual-output boost converter is presented. This converter adopts time-multiplexing control in providing two independent supply voltages (3.0 and 3.6 V) using only one 1H off-chip inductor and a single control loop. This converter is analyzed and compared with existing counterparts in the aspects of integration, architecture, control scheme, and system stability. Implementation of the power stage, the controller, and the peripheral functional blocks is discussed. The design was fabricated with a standard 0.5m CMOS n-well process. At an oscillator frequency of 1 MHz, the power conversion efficiency reaches 88.4% at a total output power of 350 mW. This topology can be extended to have multiple outputs and can be applied to buck, flyback, and other kinds of converters.",
"title": ""
},
{
"docid": "c64d5309c8f1e2254144215377b366b1",
"text": "Since the initial comparison of Seitz et al. [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al. [59], showing the results to compare more than favorably with the current state-of-the-art methods.",
"title": ""
},
{
"docid": "a4d294547c92296a2ea3222dc8d92afe",
"text": "Energy theft is a very common problem in countries like India where consumers of energy are increasing consistently as the population increases. Utilities in electricity system are destroying the amounts of revenue each year due to energy theft. The newly designed AMR used for energy measurements reveal the concept and working of new automated power metering system but this increased the Electricity theft forms administrative losses because of not regular interval checkout at the consumer's residence. It is quite impossible to check and solve out theft by going every customer's door to door. In this paper, a new procedure is followed based on MICROCONTROLLER Atmega328P to detect and control the energy meter from power theft and solve it by remotely disconnect and reconnecting the service (line) of a particular consumer. An SMS will be sent automatically to the utility central server through GSM module whenever unauthorized activities detected and a separate message will send back to the microcontroller in order to disconnect the unauthorized supply. A unique method is implemented by interspersed the GSM feature into smart meters with Solid state relay to deal with the non-technical losses, billing difficulties, and voltage fluctuation complication.",
"title": ""
},
{
"docid": "b5e811e4ae761c185c6e545729df5743",
"text": "Sleep assessment is of great importance in the diagnosis and treatment of sleep disorders. In clinical practice this is typically performed based on polysomnography recordings and manual sleep staging by experts. This procedure has the disadvantages that the measurements are cumbersome, may have a negative influence on the sleep, and the clinical assessment is labor intensive. Addressing the latter, there has recently been encouraging progress in the field of automatic sleep staging [1]. Furthermore, a minimally obtrusive method for recording EEG from electrodes in the ear (ear-EEG) has recently been proposed [2]. The objective of this study was to investigate the feasibility of automatic sleep stage classification based on ear-EEG. This paper presents a preliminary study based on recordings from a total of 18 subjects. Sleep scoring was performed by a clinical expert based on frontal, central and occipital region EEG, as well as EOG and EMG. 5 subjects were excluded from the study because of alpha wave contamination. In one subject the standard polysomnography was supplemented by ear-EEG. A single EEG channel sleep stage classifier was implemented using the same features and the same classifier as proposed in [1]. The performance of the single channel sleep classifier based on the scalp recordings showed an 85.7 % agreement with the manual expert scoring through 10-fold inter-subject cross validation, while the performance of the ear-EEG recordings was based on a 10-fold intra-subject cross validation and showed an 82 % agreement with the manual scoring. These results suggest that automatic sleep stage classification based on ear-EEG recordings may provide similar performance as compared to single channel scalp EEG sleep stage classification. Thereby ear-EEG may be a feasible technology for future minimal intrusive sleep stage classification.",
"title": ""
},
{
"docid": "a9d7826ccc665c036de72caceebc32a9",
"text": "Current topic models often suffer from discovering topics not matching human intuition, unnatural switching of topics within documents and high computational demands. We address these shortcomings by proposing a topic model and an inference algorithm based on automatically identifying characteristic keywords for topics.",
"title": ""
},
{
"docid": "a916ebc65d96a9f0bee8edbe6c360d38",
"text": "Technology must work for human race and improve the way help reaches a person in distress in the shortest possible time. In a developing nation like India, with the advancement in the transportation technology and rise in the total number of vehicles, road accidents are increasing at an alarming rate. If an accident occurs, the victim's survival rate increases when you give immediate medical assistance. You can give medical assistance to an accident victim only when you know the exact location of the accident. This paper presents an inexpensive but intelligent framework that can identify and report an accident for two-wheelers. This paper targets two-wheelers because the mortality ratio is highest in two-wheeler accidents in India. This framework includes a microcontroller-based low-cost Accident Detection Unit (ADU) that contains a GPS positioning system and a GSM modem to sense and generate accidental events to a centralized server. The ADU calculates acceleration along with ground clearance of the vehicle to identify the accidental situation. On detecting an accident, ADU sends accident detection parameters, GPS coordinates, and the current time to the Accident Detection Server (ADS). ADS maintain information on the movement of the vehicle according to the historical data, current data, and the rules that you configure in the system. If an accident occurs, ADS notifies the emergency services and the preconfigured mobile numbers for the vehicle that contains this unit.",
"title": ""
},
{
"docid": "8499953a543d16f321c2fd97b1edd7a4",
"text": "The purpose of this phenomenological study was to identify commonly occurring factors in filicide-suicide offenders, to describe this phenomenon better, and ultimately to enhance prevention of child murder. Thirty families' files from a county coroner's office were reviewed for commonly occurring factors in cases of filicide-suicide. Parental motives for filicide-suicide included altruistic and acutely psychotic motives. Twice as many fathers as mothers committed filicide-suicide during the study period, and older children were more often victims than infants. Records indicated that parents frequently showed evidence of depression or psychosis and had prior mental health care. The data support the hypothesis that traditional risk factors for violence appear different from commonly occurring factors in filicide-suicide. This descriptive study represents a step toward understanding filicide-suicide risk.",
"title": ""
},
{
"docid": "203312195c3df688a594d0c05be72b5a",
"text": "Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) are embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representation from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of holed convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback.",
"title": ""
},
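A hedged sketch of the kind of building block the passage above describes, a causal 1-D convolution with a dilation ("hole") plus a residual connection, written in PyTorch; the layer sizes, normalization choice and dilation schedule are assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Causal dilated 1-D convolution with a residual connection, used to grow
    the receptive field over long item sequences without any pooling."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation        # left-pad only => causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.norm = nn.LayerNorm(channels)
        self.act = nn.ReLU()

    def forward(self, x):                              # x: (batch, channels, seq_len)
        h = nn.functional.pad(x, (self.pad, 0))        # no access to future items
        h = self.conv(h)
        h = self.norm(h.transpose(1, 2)).transpose(1, 2)
        return x + self.act(h)                         # residual connection

# Toy usage: embed a session of 20 item ids and stack blocks with growing dilation.
emb = nn.Embedding(1000, 64)
items = torch.randint(0, 1000, (4, 20))
h = emb(items).transpose(1, 2)                         # (batch, 64, 20)
for d in (1, 2, 4):
    h = DilatedResidualBlock(64, dilation=d)(h)
print(h.shape)                                         # torch.Size([4, 64, 20])
```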
{
"docid": "e702ce3922c5b0efff89d59782d1f4da",
"text": "BACKGROUND\nDeep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific \"handcrafted\" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial.\n\n\nAIMS\nThis paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce comparable, and in many cases, superior to results from the state-of-the-art hand-crafted feature-based classification approaches.\n\n\nRESULTS\nSpecifically, in this tutorial on DL for DP image analysis, we show how an open source framework (Caffe), with a singular network architecture, can be used to address: (a) nuclei segmentation (F-score of 0.83 across 12,000 nuclei), (b) epithelium segmentation (F-score of 0.84 across 1735 regions), (c) tubule segmentation (F-score of 0.83 from 795 tubules), (d) lymphocyte detection (F-score of 0.90 across 3064 lymphocytes), (e) mitosis detection (F-score of 0.53 across 550 mitotic events), (f) invasive ductal carcinoma detection (F-score of 0.7648 on 50 k testing patches), and (g) lymphoma classification (classification accuracy of 0.97 across 374 images).\n\n\nCONCLUSION\nThis paper represents the largest comprehensive study of DL approaches in DP to date, with over 1200 DP images used during evaluation. The supplemental online material that accompanies this paper consists of step-by-step instructions for the usage of the supplied source code, trained models, and input data.",
"title": ""
},
{
"docid": "7f54157faf8041436174fa865d0f54a8",
"text": "The goal of robot learning from demonstra tion is to have a robot learn from watching a demonstration of the task to be performed In our approach to learning from demon stration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task A policy is computed based on the learned reward function and task model Lessons learned from an implementation on an an thropomorphic robot arm using a pendulum swing up task include simply mimicking demonstrated motions is not adequate to per form this task a task planner can use a learned model and reward function to com pute an appropriate policy this model based planning process supports rapid learn ing both parametric and nonparametric models can be learned and used and in corporating a task level direct learning com ponent which is non model based in addi tion to the model based planner is useful in compensating for structural modeling errors and slow model learning",
"title": ""
},
{
"docid": "3f2d9b5257896a4469b7e1c18f1d4e41",
"text": "Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently DEA has been extended to examine the efficiency of two-stage processes, where all the outputs from the first stage are intermediate measures that make up the inputs to the second stage. The resulting two-stage DEA model provides not only an overall efficiency score for the entire process, but as well yields an efficiency score for each of the individual stages. Due to the existence of intermediate measures, the usual procedure of adjusting the inputs or outputs by the efficiency scores, as in the standard DEA approach, does not necessarily yield a frontier projection. The current paper develops an approach for determining the frontier points for inefficient DMUs within the framework of two-stage DEA. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "aadc952471ecd67d0c0731fa5a375872",
"text": "As the aircraft industry is moving towards the all electric and More Electric Aircraft (MEA), there is increase demand for electrical power in the aircraft. The trend in the aircraft industry is to replace hydraulic and pneumatic systems with electrical systems achieving more comfort and monitoring features. Moreover, the structure of MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Detailed descriptions of the modern MEA generation and distribution systems as well as the power converters and load types are explained and outlined. MEA electrical distribution systems are mainly in the form of multi-converter power electronic system.",
"title": ""
},
{
"docid": "67ca6efda7f90024cc9ae50ebb4181b7",
"text": "Nowadays data growth is directly proportional to time and it is a major challenge to store the data in an organised fashion. Document clustering is the solution for organising relevant documents together. In this paper, a web clustering algorithm namely WDC-KABC is proposed to cluster the web documents effectively. The proposed algorithm uses the features of both K-means and Artificial Bee Colony (ABC) clustering algorithm. In this paper, ABC algorithm is employed as the global search optimizer and K-means is used for refining the solutions. Thus, the quality of the cluster is improved. The performance of WDC-KABC is analysed with four different datasets (webkb, wap, rec0 and 7sectors). The proposed algorithm is compared with existing algorithms such as K-means, Particle Swarm Optimization, Hybrid of Particle Swarm Optimization and K-means and Ant Colony Optimization. The experimental results of WDC-KABC are satisfactory, in terms of precision, recall, f-measure, accuracy and error rate.",
"title": ""
},
{
"docid": "e830098f9c045d376177e6d2644d4a06",
"text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.",
"title": ""
},
{
"docid": "b0c91e6f8d1d6d41693800e1253b414f",
"text": "Tightly coupling GNSS pseudorange and Doppler measurements with other sensors is known to increase the accuracy and consistency of positioning information. Nowadays, high-accuracy geo-referenced lane marking maps are seen as key information sources in autonomous vehicle navigation. When an exteroceptive sensor such as a video camera or a lidar is used to detect them, lane markings provide positioning information which can be merged with GNSS data. In this paper, measurements from a forward-looking video camera are merged with raw GNSS pseudoranges and Dopplers on visible satellites. To create a localization system that provides pose estimates with high availability, dead reckoning sensors are also integrated. The data fusion problem is then formulated as sequential filtering. A reduced-order state space modeling of the observation problem is proposed to give a real-time system that is easy to implement. A Kalman filter with measured input and correlated noises is developed using a suitable error model of the GNSS pseudoranges. Our experimental results show that this tightly coupled approach performs better, in terms of accuracy and consistency, than a loosely coupled method using GNSS fixes as inputs.",
"title": ""
},
{
"docid": "a4a56e0647849c22b48e7e5dc3f3049b",
"text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process",
"title": ""
},
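A minimal NumPy sketch of the delay-and-sum beamforming (DSBF) step referenced in the passage above, assuming far-field plane waves and known microphone positions; the frequency band selection (FBS) stage and the RANSAC-based mapping are not included:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Steer the array toward `direction` (unit vector) by delaying each
    channel so a plane wave from that direction adds coherently, then summing.
    Fractional delays are applied in the frequency domain."""
    n_mics, n_samples = signals.shape
    delays = mic_positions @ direction / c               # seconds, per microphone
    delays -= delays.min()
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spectra * phase, n=n_samples, axis=1)
    return aligned.sum(axis=0) / n_mics

# Toy usage: two mics 10 cm apart receiving an identical (broadside) signal;
# steering broadside yields the highest output power.
rng = np.random.default_rng(0)
fs = 16000
sig = rng.normal(size=2048)
signals = np.vstack([sig, sig])
mics = np.array([[-0.05, 0.0], [0.05, 0.0]])
for az in (0.0, np.pi / 4, np.pi / 2):
    d = np.array([np.cos(az), np.sin(az)])
    out = delay_and_sum(signals, mics, d, fs)
    print(f"azimuth {az:.2f} rad -> output power {np.mean(out**2):.3f}")
```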
{
"docid": "880a0dc7a717d9d68761232516b150b5",
"text": "A longstanding vision in distributed systems is to build reliable systems from unreliable components. An enticing formulation of this vision is Byzantine Fault-Tolerant (BFT) state machine replication, in which a group of servers collectively act as a correct server even if some of the servers misbehave or malfunction in arbitrary (“Byzantine”) ways. Despite this promise, practitioners hesitate to deploy BFT systems, at least partly because of the perception that BFT must impose high overheads.\n In this article, we present Zyzzyva, a protocol that uses speculation to reduce the cost of BFT replication. In Zyzzyva, replicas reply to a client's request without first running an expensive three-phase commit protocol to agree on the order to process requests. Instead, they optimistically adopt the order proposed by a primary server, process the request, and reply immediately to the client. If the primary is faulty, replicas can become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima and to achieve throughputs of tens of thousands of requests per second, making BFT replication practical for a broad range of demanding services.",
"title": ""
},
{
"docid": "391fa43f4b5843ec905b38cb81de8116",
"text": "In this paper, we propose a new method for segmenting and summarizing music based on its structure analysis. To do that, we first extract timbre feature from acoustic music signal and construct a self-similarity matrix that shows similarities among the features within music clip. We then determine candidate boundaries for music segmentation by tracking standard deviation in the matrix. Similar segments such as repetition in music clip are clustered and merged. In this way, each music clip can be represented by a sequence of states where each state represents a music segment with similar feature. We assume that the longest segment of a music clip represents the music and hence use it as a summary of the music clip. We show the performance of our proposed method through experiments.",
"title": ""
},
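A rough NumPy sketch of the self-similarity-matrix step the passage above builds on, with a toy boundary detector based on tracking local standard deviation; the features, window size and detection rule are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def self_similarity_matrix(features):
    """Cosine self-similarity between per-frame timbre features (e.g. MFCCs),
    which exposes repeated / homogeneous regions of a music clip."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    return unit @ unit.T

def novelty_curve(ssm, window=8):
    """Track how the local standard deviation of similarity values changes;
    sharp changes mark candidate segment boundaries."""
    local_std = np.array([ssm[max(0, i - window):i + window + 1, i].std()
                          for i in range(len(ssm))])
    return np.abs(np.diff(local_std))

# Toy usage: 200 frames of fake 13-dim features with a timbre change at frame 100.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (100, 13)), rng.normal(3, 1, (100, 13))])
nov = novelty_curve(self_similarity_matrix(feats))
print("strongest boundary candidate near frame", int(np.argmax(nov)) + 1)
```

Clustering the resulting segments by feature similarity and keeping the longest one would then give the summary described in the passage.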
{
"docid": "ff0d27f1ba24321dedfc01cee017a23a",
"text": "In Mexico, local empirical knowledge about medicinal properties of plants is the basis for their use as home remedies. It is generally accepted by many people in Mexico and elsewhere in the world that beneficial medicinal effects can be obtained by ingesting plant products. In this review, we focus on the potential pharmacologic bases for herbal plant efficacy, but we also raise concerns about the safety of these agents, which have not been fully assessed. Although numerous randomized clinical trials of herbal medicines have been published and systematic reviews and meta-analyses of these studies are available, generalizations about the efficacy and safety of herbal medicines are clearly not possible. Recent publications have also highlighted the unintended consequences of herbal product use, including morbidity and mortality. It has been found that many phytochemicals have pharmacokinetic or pharmacodynamic interactions with drugs. The present review is limited to some herbal medicines that are native or cultivated in Mexico and that have significant use. We discuss the cultural uses, phytochemistry, pharmacological, and toxicological properties of the following plant species: nopal (Opuntia ficus), peppermint (Mentha piperita), chaparral (Larrea divaricata), dandlion (Taraxacum officinale), mullein (Verbascum densiflorum), chamomile (Matricaria recutita), nettle or stinging nettle (Urtica dioica), passionflower (Passiflora incarnata), linden flower (Tilia europea), and aloe (Aloe vera). We conclude that our knowledge of the therapeutic benefits and risks of some herbal medicines used in Mexico is still limited and efforts to elucidate them should be intensified.",
"title": ""
}
] |
scidocsrr
|
e1a4c49e5082815f5220b2cb40d0fa94
|
Phase Retrieval by Linear Algebra
|
[
{
"docid": "5d527ad4493860a8d96283a5c58c3979",
"text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.",
"title": ""
}
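A minimal NumPy sketch of the alternating minimization (error-reduction) idea analyzed in the passage above, written for the real-valued case y = |Ax| with a Gaussian A; the sign pattern plays the role of the missing phase. Unlike the paper's analysis, this sketch reuses all measurements in every iteration rather than resampling, and it uses a spectral initialization rather than fresh random restarts:

```python
import numpy as np

def spectral_init(A, y):
    """Top eigenvector of (1/m) * sum_i y_i^2 a_i a_i^T, scaled by the
    measurement energy -- the usual initializer for AltMinPhase-style methods."""
    M = (A * (y**2)[:, None]).T @ A / len(y)
    w, V = np.linalg.eigh(M)
    return V[:, -1] * np.sqrt(np.mean(y**2))

def alternating_min_phase_retrieval(A, y, n_iters=200):
    """Alternate between (1) guessing the missing signs from the current
    estimate and (2) solving the resulting least-squares problem for x."""
    x = spectral_init(A, y)
    for _ in range(n_iters):
        signs = np.sign(A @ x)                  # step 1: estimate the "phase"
        signs[signs == 0] = 1.0
        x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)   # step 2: re-fit x
    return x

# Toy usage: 400 Gaussian measurements of a 50-dimensional signal.
rng = np.random.default_rng(0)
A = rng.normal(size=(400, 50))
x_true = rng.normal(size=50)
y = np.abs(A @ x_true)
x_hat = alternating_min_phase_retrieval(A, y)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print("relative error (up to global sign):", err / np.linalg.norm(x_true))
```

The global sign ambiguity in the error computation reflects the fact that x and -x produce identical magnitude measurements.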
] |
[
{
"docid": "3a322129019eed67686018404366fe0b",
"text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.",
"title": ""
},
{
"docid": "29e5f1dfc38c48f5296d9dde3dbc3172",
"text": "Low-cost smartphone adapters can bring virtual reality to the masses, but input is typically limited to using head tracking, which makes it difficult to perform complex tasks like navigation. Walking-in-place (WIP) offers a natural and immersive form of virtual locomotion that can reduce simulation sickness. We present VR-Drop; an immersive puzzle game that illustrates the use of WIP for virtual locomotion. Our WIP implementation doesn't require any instrumentation as it is implemented using a smartphone's inertial sensors. VR-Drop demonstrates that WIP can significantly increase VR input options and allows for a deep and immersive VR experience.",
"title": ""
},
{
"docid": "473aadc8d69632f810901d6360dd2b0c",
"text": "One of the challenges in developing real-world autonomous robots is the need for integrating and rigorously testing high-level scripting, motion planning, perception, and control algorithms. For this purpose, we introduce an open-source cross-platform software architecture called OpenRAVE, the Open Robotics and Animation Virtual Environment. OpenRAVE is targeted for real-world autonomous robot applications, and includes a seamless integration of 3-D simulation, visualization, planning, scripting and control. A plugin architecture allows users to easily write custom controllers or extend functionality. With OpenRAVE plugins, any planning algorithm, robot controller, or sensing subsystem can be distributed and dynamically loaded at run-time, which frees developers from struggling with monolithic code-bases. Users of OpenRAVE can concentrate on the development of planning and scripting aspects of a problem without having to explicitly manage the details of robot kinematics and dynamics, collision detection, world updates, and robot control. The OpenRAVE architecture provides a flexible interface that can be used in conjunction with other popular robotics packages such as Player and ROS because it is focused on autonomous motion planning and high-level scripting rather than low-level control and message protocols. OpenRAVE also supports a powerful network scripting environment which makes it simple to control and monitor robots and change execution flow during run-time. One of the key advantages of open component architectures is that they enable the robotics research community to easily share and compare algorithms.",
"title": ""
},
{
"docid": "32c5bbc07cba1aac769ee618e000a4a5",
"text": "In this paper we present Jimple, a 3-address intermediate representation that has been designed to simplify analysis and transformation of Java bytecode. We motivate the need for a new intermediate representation by illustrating several difficulties with optimizing the stack-based Java bytecode directly. In general, these difficulties are due to the fact that bytecode instructions affect an expression stack, and thus have implicit uses and definitions of stack locations. We propose Jimple as an alternative representation, in which each statement refers explicitly to the variables it uses. We provide both the definition of Jimple and a complete procedure for translating from Java bytecode to Jimple. This definition and translation have been implemented using Java, and finally we show how this implementation forms the heart of the Sable research projects.",
"title": ""
},
{
"docid": "1adae998cd412cd64d72e1cd03181cc4",
"text": "Cloud research to date has lacked data on the characteristics of the production virtual machine (VM) workloads of large cloud providers. A thorough understanding of these characteristics can inform the providers' resource management systems, e.g. VM scheduler, power manager, server health manager. In this paper, we first introduce an extensive characterization of Microsoft Azure's VM workload, including distributions of the VMs' lifetime, deployment size, and resource consumption. We then show that certain VM behaviors are fairly consistent over multiple lifetimes, i.e. history is an accurate predictor of future behavior. Based on this observation, we next introduce Resource Central (RC), a system that collects VM telemetry, learns these behaviors offline, and provides predictions online to various resource managers via a general client-side library. As an example of RC's online use, we modify Azure's VM scheduler to leverage predictions in oversubscribing servers (with oversubscribable VM types), while retaining high VM performance. Using real VM traces, we then show that the prediction-informed schedules increase utilization and prevent physical resource exhaustion. We conclude that providers can exploit their workloads' characteristics and machine learning to improve resource management substantially.",
"title": ""
},
{
"docid": "70e5b3af4496ccae2523ed1cdf1d57a2",
"text": "Modern languages for shared-memory parallelism are moving from a bulk-synchronous Single Program Multiple Data (SPMD) execution model to lightweight Task Parallel execution models for improved productivity. This shift is intended to encourage programmers to express the ideal parallelism in an application at a fine granularity that is natural for the underlying domain, while delegating to the compiler and runtime system the job of extracting coarser-grained useful parallelism for a given target system. A simple and important example of this separation of concerns between ideal and useful parallelism can be found in chunking of parallel loops, where the programmer expresses ideal parallelism by declaring all iterations of a loop to be parallel and the implementation exploits useful parallelism by executing iterations of the loop in sequential chunks.\n Though chunking of parallel loops has been used as a standard transformation for several years, it poses some interesting challenges when the parallel loop may directly or indirectly (via procedure calls) perform synchronization operations such as barrier, signal or wait statements. In such cases, a straightforward transformation that attempts to execute a chunk of loops in sequence in a single thread may violate the semantics of the original parallel program. In this paper, we address the problem of chunking parallel loops that may contain synchronization operations. We present a transformation framework that uses a combination of transformations from past work (e.g., loop strip-mining, interchange, distribution, unswitching) to obtain an equivalent set of parallel loops that chunk together statements from multiple iterations while preserving the semantics of the original parallel program. These transformations result in reduced synchronization and scheduling overheads, thereby improving performance and scalability. Our experimental results for 11 benchmark programs on an UltraSPARC II multicore processor showed a geometric mean speedup of 0.52x for the unchunked case and 9.59x for automatic chunking using the techniques described in this paper. This wide gap underscores the importance of using these techniques in future compiler and runtime systems for programming models with lightweight parallelism.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "13642d5d73a58a1336790f74a3f0eac7",
"text": "Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.",
"title": ""
},
{
"docid": "25c2212a923038644fa93bba0dd9d7b8",
"text": "Qualitative research aims to address questions concerned with developing an understanding of the meaning and experience dimensions of humans' lives and social worlds. Central to good qualitative research is whether the research participants' subjective meanings, actions and social contexts, as understood by them, are illuminated. This paper aims to provide beginning researchers, and those unfamiliar with qualitative research, with an orientation to the principles that inform the evaluation of the design, conduct, findings and interpretation of qualitative research. It orients the reader to two philosophical perspectives, the interpretive and critical research paradigms, which underpin both the qualitative research methodologies most often used in mental health research, and how qualitative research is evaluated. Criteria for evaluating quality are interconnected with standards for ethics in qualitative research. They include principles for good practice in the conduct of qualitative research, and for trustworthiness in the interpretation of qualitative data. The paper reviews these criteria, and discusses how they may be used to evaluate qualitative research presented in research reports. These principles also offer some guidance about the conduct of sound qualitative research for the beginner qualitative researcher.",
"title": ""
},
{
"docid": "44402fdc3c9f2c6efaf77a00035f38ad",
"text": "A multi-objective optimization strategy to find optimal designs of composite multi-rim flywheel rotors is presented. Flywheel energy storage systems have been expanding into applications such as rail and automotive transportation, where the construction volume is limited. Common flywheel rotor optimization approaches for these applications are single-objective, aiming to increase the stored energy or stored energy density. The proposed multi-objective optimization offers more information for decision-makers optimizing three objectives separately: stored energy, cost and productivity. A novel approach to model the manufacturing of multi-rim composite rotors facilitates the consideration of manufacturing cost and time within the optimization. An analytical stress calculation for multi-rim rotors is used, which also takes interference fits and residual stresses into account. Constrained by a failure prediction based on the Maximum Strength, Maximum Strain and Tsai-Wu criterion, the discrete and nonlinear optimization was solved. A hybrid optimization strategy is presented that combines a genetic algorithm with a local improvement executed by a sequential quadratic program. The problem was solved for two rotor geometries used for light rail transit applications showing similar design results as in industry.",
"title": ""
},
{
"docid": "621840a3c2637841b9da1e74c99e98f1",
"text": "Topic modeling is a type of statistical model for discovering the latent “topics” that occur in a collection of documents through machine learning. Currently, latent Dirichlet allocation (LDA) is a popular and common modeling approach. In this paper, we investigate methods, including LDA and its extensions, for separating a set of scientific publications into several clusters. To evaluate the results, we generate a collection of documents that contain academic papers from several different fields and see whether papers in the same field will be clustered together. We explore potential scientometric applications of such text analysis capabilities.",
"title": ""
},
{
"docid": "8f7d2c365f6272a7e681a48b500299c7",
"text": "In today's world, opinions and reviews accessible to us are one of the most critical factors in formulating our views and influencing the success of a brand, product or service. With the advent and growth of social media in the world, stakeholders often take to expressing their opinions on popular social media, namely Twitter. While Twitter data is extremely informative, it presents a challenge for analysis because of its humongous and disorganized nature. This paper is a thorough effort to dive into the novel domain of performing sentiment analysis of people's opinions regarding top colleges in India. Besides taking additional preprocessing measures like the expansion of net lingo and removal of duplicate tweets, a probabilistic model based on Bayes' theorem was used for spelling correction, which is overlooked in other research studies. This paper also highlights a comparison between the results obtained by exploiting the following machine learning algorithms: Naïve Bayes and Support Vector Machine and an Artificial Neural Network model: Multilayer Perceptron. Furthermore, a contrast has been presented between four different kernels of SVM: RBF, linear, polynomial and sigmoid.",
"title": ""
},
{
"docid": "b56eea3d49108733fb6ac7938a782222",
"text": "Orbital angular momentum (OAM), which describes the “phase twist” (helical phase pattern) of light beams, has recently gained interest due to its potential applications in many diverse areas. Particularly promising is the use of OAM for optical communications since: (i) coaxially propagating OAM beams with different azimuthal OAM states are mutually orthogonal, (ii) inter-beam crosstalk can be minimized, and (iii) the beams can be efficiently multiplexed and demultiplexed. As a result, multiple OAM states could be used as different carriers for multiplexing and transmitting multiple data streams, thereby potentially increasing the system capacity. In this paper, we review recent progress in OAM beam generation/detection, multiplexing/demultiplexing, and its potential applications in different scenarios including free-space optical communications, fiber-optic communications, and RF communications. Technical challenges and perspectives of OAM beams are also discussed. © 2015 Optical Society of America",
"title": ""
},
{
"docid": "aeec3b7e79225355a5a6ff10f9c3e4ea",
"text": "BACKGROUND\nCritically ill patients frequently suffer muscle weakness whilst in critical care. Ultrasound can reliably track loss of muscle size, but also quantifies the arrangement of the muscle fascicles, known as the muscle architecture. We sought to measure both pennation angle and fascicle length, as well as tracking changes in muscle thickness in a population of critically ill patients.\n\n\nMETHODS\nOn days 1, 5 and 10 after admission to critical care, muscle thickness was measured in ventilated critically ill patients using bedside ultrasound. Elbow flexor compartment, medial head of gastrocnemius and vastus lateralis muscle were investigated. In the lower limb, we determined the pennation angle to derive the fascicle length.\n\n\nRESULTS\nWe recruited and scanned 22 patients on day 1 after admission to critical care, 16 were re-scanned on day 5 and 9 on day 10. We found no changes to the size of the elbow flexor compartment over 10 days of admission. In the gastrocnemius, there were no significant changes to muscle thickness or pennation angle over 5 or 10 days. In the vastus lateralis, we found significant losses in both muscle thickness and pennation angle on day 5, but found that fascicle length is unchanged. Loss of muscle on day 5 was related to decreases in pennation angle. In both lower limb muscles, a positive relationship was observed between the pennation angle on day 1, and the percentage of angle lost by days 5 and 10.\n\n\nDISCUSSION\nMuscle loss in critically ill patients preferentially affects the lower limb, possibly due to the lower limb becoming prone to disuse atrophy. Muscle architecture of the thigh changes in the first 5 days of admission, in particular, we have demonstrated a correlation between muscle thickness and pennation angle. It is hypothesised that weakness in the lower limb occurs through loss of force generation via a reduced pennation angle.\n\n\nCONCLUSION\nUsing ultrasound, we have been able to demonstrate that muscle thickness and architecture of vastus lateralis undergo rapid changes during the early phase of admission to a critical care environment.",
"title": ""
},
{
"docid": "11828571b57966958bd364947f41ad40",
"text": "A smart city is developed, deployed and maintained with the help of Internet of Things (IoT). The smart cities have become an emerging phenomena with rapid urban growth and boost in the field of information technology. However, the function and operation of a smart city is subject to the pivotal development of security architectures. The contribution made in this paper is twofold. Firstly, it aims to provide a detailed, categorized and comprehensive overview of the research on security problems and their existing solutions for smart cities. The categorization is based on several factors such as governance, socioeconomic and technological factors. This classification provides an easy and concise view of the security threats, vulnerabilities and available solutions for the respective technologies areas that are proposed over the period 2010-2015. Secondly, an IoT testbed for smart cities architecture, i.e., SmartSantander is also analyzed with respect to security threats and vulnerabilities to smart cities. The existing best practices regarding smart city security are discussed and analyzed with respect to their performance, which could be used by different stakeholders of the smart cities.",
"title": ""
},
{
"docid": "1dc615b299a8a63caa36cd8e36459323",
"text": "Domain adaptation manages to build an effective target classifier or regression model for unlabeled target data by utilizing the well-labeled source data but lying different distributions. Intuitively, to address domain shift problem, it is crucial to learn domain invariant features across domains, and most existing approaches have concentrated on it. However, they often do not directly constrain the learned features to be class discriminative for both source and target data, which is of vital importance for the final classification. Therefore, in this paper, we put forward a novel feature learning method for domain adaptation to construct both domain invariant and class discriminative representations, referred to as DICD. Specifically, DICD is to learn a latent feature space with important data properties preserved, which reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter as much as possible. Experiments in this paper have demonstrated that the class discriminative properties will dramatically alleviate the cross-domain distribution inconsistency, which further boosts the classification performance. Moreover, we show that exploring both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, and the optimal solution can be derived effectively by solving a generalized eigen-decomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD can outperform the competitors significantly.",
"title": ""
},
{
"docid": "8224818f838fd238879dca0a4b5531c1",
"text": "Intelligence plays an important role in supporting military operations. In the course of military intelligence a vast amount of textual data in different languages needs to be analyzed. In addition to information provided by traditional military intelligence, nowadays the internet offers important resources of potential militarily relevant information. However, we are not able to manually handle this vast amount of data. The science of natural language processing (NLP) provides technology to efficiently handle this task, in particular by means of machine translation and text mining. In our research project ISAF-MT we created a statistical machine translation (SMT) system for Dari to German. In this paper we describe how NLP technologies and in particular SMT can be applied to different intelligence processes. We therefore argue that multilingual NLP technology can strongly support military operations.",
"title": ""
},
{
"docid": "c44f971f063f8594985a98beb897464a",
"text": "In recent years, multi-agent epistemic planning has received attention from both dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and incapability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning where the initial KB and the goal, the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternating cover disjunctive formulas (ACDFs). We propose basic revision and update algorithms for ACDFs. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision and update algorithms, adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MEPK. Our experimental results show the viability of our approach.",
"title": ""
},
{
"docid": "62e366f679745d410b72e2f5fb56be4d",
"text": "We present a novel depth image enhancement approach for RGB-D cameras such as the Kinect. Our approach employs optical flow of color images for refining the quality of corresponding depth images. We track every depth pixel over a sequence of frames in the temporal domain and use valid depth values of the same point for recovering missing and inaccurate information. We conduct experiments on different test datasets and present visually appealing results. Our method significantly reduces the temporal noise level and the flickering artifacts.",
"title": ""
},
{
"docid": "f1d096392288d06a481f6f856e8b4aba",
"text": "The ever-growing complexity of software systems coupled with their stringent availability requirements are challenging the manual management of software after its deployment. This has motivated the development of self-adaptive software systems. Self-adaptation endows a software system with the ability to satisfy certain objectives by automatically modifying its behavior at runtime. While many promising approaches for the construction of self-adaptive software systems have been developed, the majority of them ignore the uncertainty underlying the adaptation. This has been one of the key inhibitors to widespread adoption of self-adaption techniques in risk-averse real-world applications. Uncertainty in this setting is a vaguely understood term. In this paper, we characterize the sources of uncertainty in self-adaptive software system, and demonstrate its impact on the system’s ability to satisfy its objectives. We then provide an alternative notion of optimality that explicitly incorporates the uncertainty underlying the knowledge (models) used for decision making. We discuss the state-of-the-art for dealing with uncertainty in this setting, and conclude with a set of challenges, which provide a road map for future research.",
"title": ""
}
] |
scidocsrr
|
7f2e6374882f97ff74e43f0f3a3a2c4b
|
Automatic Annotation Suggestions and Custom Annotation Layers in WebAnno
|
[
{
"docid": "2ee1f7a56eba17b75217cca609452f20",
"text": "We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.",
"title": ""
}
] |
[
{
"docid": "1572891f4c2ab064c6d6a164f546e7c1",
"text": "BACKGROUND Unexplained gastrointestinal (GI) symptoms and joint hypermobility (JHM) are common in the general population, the latter described as benign joint hypermobility syndrome (BJHS) when associated with musculo-skeletal symptoms. Despite overlapping clinical features, the prevalence of JHM or BJHS in patients with functional gastrointestinal disorders has not been examined. METHODS The incidence of JHM was evaluated in 129 new unselected tertiary referrals (97 female, age range 16-78 years) to a neurogastroenterology clinic using a validated 5-point questionnaire. A rheumatologist further evaluated 25 patients with JHM to determine the presence of BJHS. Groups with or without JHM were compared for presentation, symptoms and outcomes of relevant functional GI tests. KEY RESULTS Sixty-three (49%) patients had evidence of generalized JHM. An unknown aetiology for GI symptoms was significantly more frequent in patients with JHM than in those without (P < 0.0001). The rheumatologist confirmed the clinical impression of JHM in 23 of 25 patients, 17 (68%) of whom were diagnosed with BJHS. Patients with co-existent BJHS and GI symptoms experienced abdominal pain (81%), bloating (57%), nausea (57%), reflux symptoms (48%), vomiting (43%), constipation (38%) and diarrhoea (14%). Twelve of 17 patients presenting with upper GI symptoms had delayed gastric emptying. One case is described in detail. CONCLUSIONS & INFERENCES In a preliminary retrospective study, we have found a high incidence of JHM in patients referred to tertiary neurogastroenterology care with unexplained GI symptoms and in a proportion of these a diagnosis of BJHS is made. Symptoms and functional tests suggest GI dysmotility in a number of these patients. The possibility that a proportion of patients with unexplained GI symptoms and JHM may share a common pathophysiological disorder of connective tissue warrants further investigation.",
"title": ""
},
{
"docid": "3258be27b22be228d2eae17c91a20664",
"text": "In any non-deterministic environment, unexpected events can indicate true changes in the world (and require behavioural adaptation) or reflect chance occurrence (and must be discounted). Adaptive behaviour requires distinguishing these possibilities. We investigated how humans achieve this by integrating high-level information from instruction and experience. In a series of EEG experiments, instructions modulated the perceived informativeness of feedback: Participants performed a novel probabilistic reinforcement learning task, receiving instructions about reliability of feedback or volatility of the environment. Importantly, our designs de-confound informativeness from surprise, which typically co-vary. Behavioural results indicate that participants used instructions to adapt their behaviour faster to changes in the environment when instructions indicated that negative feedback was more informative, even if it was simultaneously less surprising. This study is the first to show that neural markers of feedback anticipation (stimulus-preceding negativity) and of feedback processing (feedback-related negativity; FRN) reflect informativeness of unexpected feedback. Meanwhile, changes in P3 amplitude indicated imminent adjustments in behaviour. Collectively, our findings provide new evidence that high-level information interacts with experience-driven learning in a flexible manner, enabling human learners to make informed decisions about whether to persevere or explore new options, a pivotal ability in our complex environment.",
"title": ""
},
{
"docid": "ed9528fe8e4673c30de35d33130c728e",
"text": "This paper introduces a friendly system to control the home appliances remotely by the use of mobile cell phones; this system is well known as “Home Automation System” (HAS).",
"title": ""
},
{
"docid": "102cd9d7db08d84645b9b524b3fa77e0",
"text": "The United States has become progressively more multicultural, with the ethnic population growing at record rates. The US Census Bureau projects that, by the year 2056, greater than 50% of the US population will be of non-Caucasian descent. Ethnic patients have different cosmetic concerns and natural features that are unique. The cosmetic concerns of ethnic patients also differ as the result of differences in skin pathophysiology, mechanisms of aging, and unique anatomic structure. There is no longer a single standard of beauty. We must now adapt to the more diverse population and understand how to accommodate the diversity of beauty in the United States. Ethnic patients do not necessarily want a Westernized look because what constitutes beauty is determined by racial, cultural, and environmental influences. We as leaders in skin care must understand these differences and adapt our practices accordingly. This article will focus on the differences in aging in different ethnic populations and highlight procedures unique to skin of color.",
"title": ""
},
{
"docid": "a90802bd8cb132334999e6376053d5ef",
"text": "We use single-agent and multi-agent Reinforcement Learning (RL) for learning dialogue policies in a resource allocation negotiation scenario. Two agents learn concurrently by interacting with each other without any need for simulated users (SUs) to train against or corpora to learn from. In particular, we compare the Qlearning, Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) algorithms, varying the scenario complexity (state space size), the number of training episodes, the learning rate, and the exploration rate. Our results show that generally Q-learning fails to converge whereas PHC and PHC-WoLF always converge and perform similarly. We also show that very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora.",
"title": ""
},
{
"docid": "e289e25a86e743a189fd5fec1d911f74",
"text": "Congestion avoidance mechanisms allow a network to operate in the optimal region of low delay and high throughput, thereby, preventing the network from becoming congested. This is different from the traditional congestion control mechanisms that allow the network to recover from the congested state of high delay and low throughput. Both congestion avoidance and congestion control mechanisms are basically resource management problems. They can be formulated as system control problems in which the system senses its state and feeds this back to its users who adjust their controls. The key component of any congestion avoidance scheme is the algorithm (or control function) used by the users to increase or decrease their load (window or rate). We abstractly characterize a wide class of such increase/decrease algorithms and compare them using several different performance metrics. They key metrics are efficiency, fairness, convergence time, and size of oscillations. It is shown that a simple additive increase and multiplicative decrease algorithm satisfies the sufficient conditions for convergence to an efficient and fair state regardless of the starting state of the network. This is the algorithm finally chosen for implementation in the congestion avoidance scheme recommended for Digital Networking Architecture and OSI Transport Class 4 Networks.",
"title": ""
},
{
"docid": "3a32bb2494edefe8ea28a83dad1dc4c4",
"text": "Objective: The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. Methods: The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publically available database of 23 PPG recordings. Results: On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. Conclusion: The error rate is significantly reduced when compared with the state-of-the art PPG-based HR estimation methods. Significance: The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.",
"title": ""
},
{
"docid": "68a84156f64d4d1926a52d60fc3eadf3",
"text": "Parkinson's disease is a common and disabling disorder of movement owing to dopaminergic denervation of the striatum. However, it is still unclear how this denervation perverts normal functioning to cause slowing of voluntary movements. Recent work using tissue slice preparations, animal models and in humans with Parkinson's disease has demonstrated abnormally synchronized oscillatory activity at multiple levels of the basal ganglia-cortical loop. This excessive synchronization correlates with motor deficit, and its suppression by dopaminergic therapies, ablative surgery or deep-brain stimulation might provide the basic mechanism whereby diverse therapeutic strategies ameliorate motor impairment in patients with Parkinson's disease. This review is part of the INMED/TINS special issue, Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).",
"title": ""
},
{
"docid": "471db984564becfea70fb2946ef4871e",
"text": "We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection.",
"title": ""
},
{
"docid": "49d2d46a16571524e94b22997d1b585c",
"text": "In this paper, we discuss the development of the sprawling-type quadruped robot named “TITAN-XIII” and its dynamic walking algorithm. We develop an experimental quadruped robot especially designed for dynamic walking. Unlike dog-like robots, the prototype robot looks like a four-legged spider. As an experimental robot, we focus on the three basic concepts: lightweight, wide range of motion and ease of maintenance. To achieve these goals, we introduce a wire-driven mechanism using a synthetic fiber to transmit power to each axis making use of this wire-driven mechanism, we can locate the motors at the base of the leg, reducing, consequently, its inertia. Additionally, each part of the robot is unitized, and can be easily disassembled. As a dynamic walking algorithm, we proposed what we call “longitudinal acceleration trajectory”. This trajectory was applied to intermittent trot gait. The algorithm was tested with the developed robot, and its performance was confirmed through experiments.",
"title": ""
},
{
"docid": "a07472c2f086332bf0f97806255cb9d5",
"text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.",
"title": ""
},
{
"docid": "d64af720c926814a6720050ae99fef3a",
"text": "Today, mobile Botnets are well known in the IT security field. Whenever we talk about Botnets on mobile phones, we mostly deal with denial of service attacks (Kifayat and Wilson http://www.cms.livjm.ac.uk/pgnet2012/Proceedings/Papers/1569607737.pdf , 2012). This is due to the fact that we refer to classical Botnets on computers. But mobile phones are “mobiles” by definition. Indeed, they offer a lot of information not present on personal computers. They have a lot of sensors which are interesting for attackers. Most of the time, we used to think that targeted attacks have a single target. But with mobile phones, targeting a group of people does make sense. Coupled with data collected by the Sat Nav, we could so be able to localize with a certain probability meeting points in a criminal organization. By this way of attacking, we can deduce lots of things by cross-checking information obtained on devices. Thereby, this paper will aim to show the potential offered by such attacks. Firstly, this paper will focus on localization data. Furthermore, an implementation of an Android botnet and its server side part will be presented for illustrative purposes. Besides, the major part of the source code used will be included step by step in this paper. This paper aims to be technical because the author does not want to show any theory without trying some practicals tests with real and technical constraints.",
"title": ""
},
{
"docid": "542c115a46d263ee347702cf35b6193c",
"text": "We obtain universal bounds on the energy of codes and for designs in Hamming spaces. Our bounds hold for a large class of potential functions, allow unified treatment, and can be viewed as a generalization of the Levenshtein bounds for maximal codes.",
"title": ""
},
{
"docid": "f81a0561b27a50e99e6f8257685c3d20",
"text": "Small cells were introduced to support high data-rate services and for dense deployment. Owing to user equipment (UE) mobility and small-cell coverage, the load across a small-cell network recurrently becomes unbalanced. Such unbalanced loads result in performance degradation in throughput and handover success and can even cause radio link failure. In this paper, we propose a mobility load balancing algorithm for small-cell networks by adapting network load status and considering load estimation. To that end, the proposed algorithm adjusts handover parameters depending on the overloaded cells and adjacent cells. Resource usage depends on signal qualities and traffic demands of connected UEs in long-term evolution. Hence, we define a resource block-utilization ratio as a measurement of cell load and employ an adaptive threshold to determine overloaded cells, according to the network load situation. Moreover, to avoid performance oscillation, the impact of moving loads on the network is considered. Through system-level simulations, the performance of the proposed algorithm is evaluated in various environments. Simulation results show that the proposed algorithm provides a more balanced load across networks (i.e., smaller standard deviation across the cells) and higher network throughput than previous algorithms.",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
},
{
"docid": "7d6c441d745adf8a7f6d833da9e46716",
"text": "X-ray computed tomography is a widely used method for nondestructive visualization of the interior of different samples - also of wooden material. Different to usual applications very high resolution is needed to use such CT images in dendrochronology and to evaluate wood species. In dendrochronology big samples (up to 50 cm) are necessary to scan. The needed resolution is - depending on the species - about 20 mum. In wood identification usually very small samples have to be scanned, but wood anatomical characters of less than 1 mum in width have to be visualized. This paper deals with four examples of X-ray CT scanned images to be used for dendrochronology and wood identification.",
"title": ""
},
{
"docid": "0b30b01adbb8b39fa37e4b6348abac34",
"text": "Many of the leading approaches in language modeling introduce novel, complex and specialized architectures. We take existing state-of-the-art word level language models based on LSTMs and QRNNs and extend them to both larger vocabularies as well as character-level granularity. When properly tuned, LSTMs and QRNNs achieve stateof-the-art results on character-level (Penn Treebank, enwik8) and word-level (WikiText-103) datasets, respectively. Results are obtained in only 12 hours (WikiText-103) to 2 days (enwik8) using a single modern GPU.",
"title": ""
},
{
"docid": "9a10716e1d7e24b790fb5dd48ad254ab",
"text": "Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website.",
"title": ""
},
{
"docid": "584456ef251fbf31363832fc82bd3d42",
"text": "Neural network architectures found by sophistic search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their design is often very complex, impairing execution speed. Additionally, finding models outside of the search space is not possible by design. While our space is still limited, we implement undiscoverable expert knowledge into the economic search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining baselinelike 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low parameter space, achieving less than 5% test error with little regularization and only 236K parameters.",
"title": ""
},
{
"docid": "42e2a8b8c1b855fba201e3421639d80d",
"text": "Fraudulent behaviors in Google’s Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps, by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay’s PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer’s detection technology, and reveals a new type of attack campaign, where users are harassed into writing positive reviews, and install and review other apps.",
"title": ""
}
] |
scidocsrr
|
a4a74d88724099d063e0767f32505a01
|
Vision System for AGI: Problems and Directions
|
[
{
"docid": "94bb7d2329cbea921c6f879090ec872d",
"text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io",
"title": ""
},
{
"docid": "5cb704f9980a9d28da4cabd903bf1699",
"text": "The ability for an agent to localize itself within an environment is crucial for many real-world applications. For unknown environments, Simultaneous Localization and Mapping (SLAM) enables incremental and concurrent building of and localizing within a map. We present a new, differentiable architecture, Neural Graph Optimizer, progressing towards a complete neural network solution for SLAM by designing a system composed of a local pose estimation model, a novel pose selection module, and a novel graph optimization process. The entire architecture is trained in an end-to-end fashion, enabling the network to automatically learn domain-specific features relevant to the visual odometry and avoid the involved process of feature engineering. We demonstrate the effectiveness of our system on a simulated 2D maze and the 3D ViZ-Doom environment.",
"title": ""
}
] |
[
{
"docid": "d055902aa91efacb35a204132c51a68e",
"text": "This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition. In contrast with previous research where this hypothesis has been successfully tested against relatively simple compositional models, in our work we use a robust model trained with linear regression. The results we get in two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is modelindependent.",
"title": ""
},
{
"docid": "cc9c9720b223ff1d433758bce11a373a",
"text": "or to skim the text of the article quickly, while academics are more likely to download and print the paper. Further research investigating the ratio between HTML views and PDF downloads could uncover interesting findings about how the public interacts with the open access (OA) research literature. Scholars In addition to tracking scholarly impacts on traditionally invisible audiences, altmetrics hold potential for tracking previously hidden scholarly impacts. Faculty of 1000 Faculty of 1000 (F1000) is a service publishing reviews of important articles, as adjudged by a core “faculty” of selected scholars. Wets, Weedon, and Velterop (2003) argue that F1000 is valuable because it assesses impact at the article level, and adds a human level assessment that statistical indicators lack. Others disagree (Nature Neuroscience, 2005), pointing to a very strong correlation (r = 0.93) between F1000 score and Journal Impact Factor. This said, the service has clearly demonstrated some value, as over two thirds of the world’s top research institutions pay the annual subscription fee to use F1000 (Wets et al., 2003). Moreover, F1000 has been to shown to spot valuable articles which “sole reliance on bibliometric indicators would have led [researchers] to miss” (Allen, Jones, Dolby, Lynn, & Walport, 2009, p. 1). In the PLoS dataset, F1000 recommendations were not closely associated with citation or other altmetrics counts, and formed their own factor in factor analysis, suggesting they track a relatively distinct sort of impact. Conversation (scholarly blogging) In this context, “scholarly blogging” is distinguished from its popular counterpart by the expertise and qualifications of the blogger. While a useful distinction, this is inevitably an imprecise one. One approach has been to limit the investigation to science-only aggregators like ResearchBlogging (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Academic blogging has grown steadily in visibility; academics have blogged their dissertations (Efimova, 2009), and the ranks of academic bloggers contain several Fields Medalists, Nobel laureates, and other eminent scholars (Nielsen, 2009). Economist and Nobel laureate Paul Krugman (Krugman, 2012), himself a blogger, argues that blogs are replacing the working-paper culture that has in turn already replaced economics journals as distribution tools. Given its importance, there have been surprisingly few altmetrics studies of scholarly blogging. Extant research, however, has shown that blogging shares many of the characteristics of more formal communication, including a long-tail distribution of cited articles (Groth & Gurney, 2010; Shema & Bar-Ilan, 2011). Although science bloggers can write anonymously, most blog under their real names (Shema & Bar-Ilan, 2011). Conversation (Twitter) Scholars on Twitter use the service to support different activities, including teaching (Dunlap & Lowenthal, 2009; Junco, Heiberger, & Loken, 2011), participating in conferences (Junco et al., 2011; Letierce et al., 2010; Ross et al., 2011), citing scholarly articles (Priem & Costello, 2010; Weller, Dröge, & Puschmann, 2011), and engaging in informal communication (Ross et al., 2011; Zhao & Rosson, 2009). Citations from Twitter are a particularly interesting data source, since they capture the sort of informal discussion that accompanies early important work. There is, encouragingly, evidence that Tweeting scholars take citations from Twitter seriously, both in creating and reading them (Priem & Costello, 2010). 
The number of scholars on Twitter is growing steadily, as shown in Figure 1. The same study found that, in a sample of around 10,000 Ph.D. students and faculty members at five representative universities, one 1 in 40 scholars had an active Twitter account. Although some have suggested that Twitter is only used by younger scholars, rank was not found to significantly associate with Twitter use, and in fact faculty members’ tweets were twice as likely to discuss their and others’ scholarly work. Conversation (article commenting) Following the lead of blogs and other social media platforms, many journals have added article-level commenting to their online platforms in the middle of the last decade. In theory, the discussion taking place in these threads is another valuable lens into the early impacts of scientific ideas. In practice, however, many commenting systems are virtual ghost towns. In a sample of top medical journals, fully half had commenting systems laying idle, completely unused by anyone (Schriger, Chehrazi, Merchant, & Altman, 2011). However, commenting was far from universally unsuccessful; several journals had comments on 50-76% of their articles. In a sample from the British Medical Journal, articles had, on average, nearly five comments each (Gotzsche, Delamothe, Godlee, & Lundh, 2010). Additionally, many articles may accumulate comments in other environments; the growing number of external comment sites allows users to post comments on journal articles published elsewhere. These have tended to appear and disappear quickly over the last few years. Neylon (2010) argues that online article commenting is thriving, particularly for controversial papers, but that \"...people are much more comfortable commenting in their own spaces” (para. 5), like their blogs and on Twitter. Reference managers Reference managers like Mendeley and CiteULike are very useful sources of altmetrics data and are currently among the most studied. Although scholars have used electronic reference managers for some time, this latest generation offers scientometricians the chance to query their datasets, offering a compelling glimpse into scholars’ libraries. It is worth summarizing three main points, though. First, the most important social reference managers are CiteULike and Mendeley. Another popular reference manager, Zotero, has received less study (but see Lucas, 2008). Papers and ReadCube are newer, smaller reference managers; Connotea and 2Collab both dealt poorly with spam; the latter has closed, and the former may follow. Second, the usage base of social reference managers—particularly Mendeley—is large and growing rapidly. Mendeley’s coverage, in particular, rivals that of commercial databases like Scopus and Web of Science (WoS) (Bar-Ilan et al., 2012; Haustein & Siebenlist, 2011; Li et al., 2011; Priem et al., 2012). Finally, inclusion in reference managers correlates to citation more strongly than most other altmetrics. Working with various datasets, researchers have reported correlations of .46 (Bar-Ilan, 2012), .56 (Li et al., 2011), and .5 (Priem et al., 2012) between inclusion in users’ Mendeley libraries, and WoS citations. This closer relationship is likely because of the importance of reference managers in the citation workflow. However, the lack of perfect or even strong correlation suggests that this altmetric, too, captures influence not reflected in the citation record. 
There has been particular interest in using social bookmarking for recommendations (Bogers & van den Bosch, 2008; Jiang, He, & Ni, 2011). pdf downloads As discussed earlier, most research on downloads today does not distinguish between HTML views in PDF downloads. However there is a substantial and growing body of research investigating article downloads, and their relation to later citation. Several researchers have found that downloads predict or correlate with later citation (Perneger, 2004; Brody et al., 2006). The MESUR project is the largest of these studies to date, and used linked usage events to create a novel map of the connections between disciplines, as well as analyses of potential metrics using download and citation data in novel ways (Bollen, et al., 2009). Shuai, Pepe, and Bollen (2012) show that downloads and Twitter citations interact, with Twitter likely driving traffic to new papers, and also reflecting reader interest. Uses, limitations and future research Uses Several uses of altmetrics have been proposed, which aim to capitalize on their speed, breadth, and diversity, including use in evaluation, analysis, and prediction. Evaluation The breadth of altmetrics could support more holistic evaluation efforts; a range of altmetrics may help solve the reliability problems of individual measures by triangulating scores from easily-accessible “converging partial indicators” (Martin & Irvine, 1983, p. 1). Altmetrics could also support the evaluation of increasingly important, non-traditional scholarly products like datasets and software, which are currently underrepresented in the citation record (Howison & Herbsleb, 2011; Sieber & Trumbo, 1995). Research that impacts wider audiences could also be better rewarded; Neylon (2012) relates a compelling example of how tweets reveal clinical use of a research paper—use that would otherwise go undiscovered and unrewarded. The speed of altmetrics could also be useful in evaluation, particularly for younger scholars whose research has not yet accumulated many citations. Most importantly, altmetrics could help open a window on scholars’ “scientific ‘street cred’” (Cronin, 2001, p. 6), helping reward researchers whose subtle influences—in conversations, teaching, methods expertise, and so on— influence their colleagues without perturbing the citation record. Of course, potential evaluators must be strongly cautioned that while uncritical application of any metric is dangerous, this is doubly so with altmetrics, whose research base is not yet adequate to support high-stakes decisions.",
"title": ""
},
{
"docid": "4e791e4367b5ef9ff4259a87b919cff7",
"text": "Considerable attention has been paid to dating the earliest appearance of hominins outside Africa. The earliest skeletal and artefactual evidence for the genus Homo in Asia currently comes from Dmanisi, Georgia, and is dated to approximately 1.77–1.85 million years ago (Ma)1. Two incisors that may belong to Homo erectus come from Yuanmou, south China, and are dated to 1.7 Ma2; the next-oldest evidence is an H. erectus cranium from Lantian (Gongwangling)—which has recently been dated to 1.63 Ma3—and the earliest hominin fossils from the Sangiran dome in Java, which are dated to about 1.5–1.6 Ma4. Artefacts from Majuangou III5 and Shangshazui6 in the Nihewan basin, north China, have also been dated to 1.6–1.7 Ma. Here we report an Early Pleistocene and largely continuous artefact sequence from Shangchen, which is a newly discovered Palaeolithic locality of the southern Chinese Loess Plateau, near Gongwangling in Lantian county. The site contains 17 artefact layers that extend from palaeosol S15—dated to approximately 1.26 Ma—to loess L28, which we date to about 2.12 Ma. This discovery implies that hominins left Africa earlier than indicated by the evidence from Dmanisi. An Early Pleistocene artefact assemblage from the Chinese Loess Plateau indicates that hominins had left Africa by at least 2.1 million years ago, and occupied the Loess Plateau repeatedly for a long time.",
"title": ""
},
{
"docid": "2fc1afae973ddd832afa92d27222ef09",
"text": "In our 1990 paper, we showed that managers concerned with their reputations might choose to mimic the behavior of other managers and ignore their own information. We presented a model in which “smart” managers receive correlated, informative signals, whereas “dumb” managers receive independent, uninformative signals. Managers have an incentive to follow the herd to indicate to the labor market that they have received the same signal as others, and hence are likely to be smart. This model of reputational herding has subsequently found empirical support in a number of recent papers, including Judith A. Chevalier and Glenn D. Ellison’s (1999) study of mutual fund managers and Harrison G. Hong et al.’s (2000) study of equity analysts. We argued in our 1990 paper that reputational herding “requires smart managers’ prediction errors to be at least partially correlated with each other” (page 468). In their Comment, Marco Ottaviani and Peter Sørensen (hereafter, OS) take issue with this claim. They write: “correlation is not necessary for herding, other than in degenerate cases.” It turns out that the apparent disagreement hinges on how strict a definition of herding one adopts. In particular, we had defined a herding equilibrium as one in which agentB alwaysignores his own information and follows agent A. (See, e.g., our Propositions 1 and 2.) In contrast, OS say that there is herding when agent B sometimesignores his own information and follows agent A. The OS conclusion is clearly correct given their weaker definition of herding. At the same time, however, it also seems that for the stricter definition that we adopted in our original paper, correlated errors on the part of smart managers are indeed necessary for a herding outcome—even when one considers the expanded parameter space that OS do. We will try to give some intuition for why the different definitions of herding lead to different conclusions about the necessity of correlated prediction errors. Along the way, we hope to convince the reader that our stricter definition is more appropriate for isolating the economic effects at work in the reputational herding model. An example is helpful in illustrating what is going on. Consider a simple case where the parameter values are as follows: p 5 3⁄4; q 5 1⁄4; z 5 1⁄2, andu 5 1⁄2. In our 1990 paper, we also imposed the constraint that z 5 ap 1 (1 2 a)q, which further implies thata 5 1⁄2. The heart of the OS Comment is the idea that this constraint should be disposed of—i.e., we should look at other values of a. Without loss of generality, we will consider values of a above 1⁄2, and distinguish two cases.",
"title": ""
},
{
"docid": "454c390fcd7d9a3d43842aee19c77708",
"text": "Altmetrics have gained momentum and are meant to overcome the shortcomings of citation-based metrics. In this regard some light is shed on the dangers associated with the new “all-in-one” indicator altmetric score.",
"title": ""
},
{
"docid": "6e80065ade40ada9efde1f58859498bc",
"text": "Neural networks, as powerful tools for data mining and knowledge engineering, can learn from data to build feature-based classifiers and nonlinear predictive models. Training neural networks involves the optimization of nonconvex objective functions, and usually, the learning process is costly and infeasible for applications associated with data streams. A possible, albeit counterintuitive, alternative is to randomly assign a subset of the networks’ weights so that the resulting optimization task can be formulated as a linear least-squares problem. This methodology can be applied to both feedforward and recurrent networks, and similar techniques can be used to approximate kernel functions. Many experimental results indicate that such randomized models can reach sound performance compared to fully adaptable ones, with a number of favorable benefits, including (1) simplicity of implementation, (2) faster learning with less intervention from human beings, and (3) possibility of leveraging overall linear regression and classification algorithms (e.g., l1 norm minimization for obtaining sparse formulations). This class of neural networks attractive and valuable to the data mining community, particularly for handling large scale data mining in real-time. However, the literature in the field is extremely vast and fragmented, with many results being reintroduced multiple times under different names. This overview aims to provide a self-contained, uniform introduction to the different ways in which randomization can be applied to the design of neural networks and kernel functions. A clear exposition of the basic framework underlying all these approaches helps to clarify innovative lines of research, open problems, and most importantly, foster the exchanges of well-known results throughout different communities. © 2017 John Wiley & Sons, Ltd",
"title": ""
},
{
"docid": "89238dd77c0bf0994b53190078eb1921",
"text": "Several methods exist for a computer to generate music based on data including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where, we consider a unit to be a variable length number of measures of music. We first examine whether a unit selection method, that is restricted to a finite size unit library, can be sufficient for encompassing a wide spectrum of music. This is done by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forcedchoiced ranking task. Our system is compared to a note-level generative baseline model that consists of a stacked LSTM trained to predict forward by one note.",
"title": ""
},
{
"docid": "b306a3b20b73f537d8d9634957f0688c",
"text": "In this paper, we report real-time measurement results of various contact forces exerted on a new flexible capacitive three-axis tactile sensor array based on polydimethylsiloxane (PDMS). A unit sensor consists of two thick PDMS layers with embedded copper electrodes, a spacer layer, an insulation layer and a bump layer. There are four capacitors in a unit sensor to decompose a contact force into its normal and shear components. They are separated by a wall-type spacer to improve the mechanical response time. Four capacitors are arranged in a square form. The whole sensor is an 8 × 8 array of unit sensors and each unit sensor responds to forces in all three axes. Measurement results show that the full-scale range of detectable force is around 0–20 mN (250 kPa) for all three axes. The estimated sensitivities of a unit sensor with the current setup are 1.3, 1.2 and 1.2%/mN for the x-, yand z-axes, respectively. A simple mechanical model has been established to calculate each axial force component from the measured capacitance value. Normal and shear force distribution images are captured from the fabricated sensor using a real-time measurement system. The mechanical response time of a unit sensor has been estimated to be less than 160 ms. The flexibility of the sensor has also been demonstrated by operating the sensor on a curved surface of 4 mm radius of curvature. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "41cf1b873d69f15cbc5fa25e849daa61",
"text": "Methods for controlling the bias/variance tradeoff typica lly assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural netwo rks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of we ight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary signifi cantly throughout the input space of the model. We show that overselection of the degrees of freedom for an MLP train ed with backpropagation can improve the approximation in regions of underfitting, while not significantly overfitti ng in other regions. This can be a significant advantage over other models. Furthermore, we show that “better” learning a lgorithms such as conjugate gradient can in fact lead to worse generalization, because they can be more prone to crea ting v rying degrees of overfitting in different regions of the input space. While experimental results cannot cover all practical situations, our results do help to explain common behavior that does not agree with theoretical expect ations. Our results suggest one important reason for the relative success of MLPs, bring into question common bel iefs about neural network training regarding training algorithms, overfitting, and optimal network size, suggest alternate guidelines for practical use (in terms of the trai ning algorithm and network size selection), and help to direct fu ture work (e.g. regarding the importance of the MLP/BP training bias, the possibility of worse performance for “be tter” training algorithms, local “smoothness” criteria, a nd further investigation of localized overfitting).",
"title": ""
},
{
"docid": "ed282d88b5f329490f390372c502f238",
"text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.",
"title": ""
},
{
"docid": "dc813db85741a56d0f47044b9c2276d0",
"text": "We study the complexity required for the implementation of multi-agent contracts under a variety of solution concepts. A contract is a mapping from strategy profiles to outcomes. Practical implementation of a contract requires it to be ''simple'', an illusive concept that needs to be formalized. A major source of complexity is the burden involving verifying the contract fulfillment (for example in a court of law). Contracts which specify a small number of outcomes are easier to verify and are less prone to disputes. We therefore measure the complexity of a contract by the number of outcomes it specifies. Our approach is general in the sense that all strategic interaction represented by a normal form game are allowed. The class of solution concepts we consider is rather exhaustive and includes Nash equilibrium with both pure and mixed strategies, dominant strategy implementation, iterative elimination of dominated strategies and strong equilibria.\n Some interesting insights can be gained from our analysis: Firstly, our results indicate that the complexity of implementation is independent of the size of the strategy spaces of the players but for some solution concepts grows with the number of players. Second, the complexity of {\\em unique} implementation is sometimes slightly larger, but not much larger than non-unique implementation. Finally and maybe surprisingly, for most solution concepts implementation with optimal cost usually does not require higher complexity than the complexity necessary for implementation at all.",
"title": ""
},
{
"docid": "b0741999659724f8fa5dc1117ec86f0d",
"text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "7fd9da6cb91385238335807348d7879e",
"text": "Modeling the popularity dynamics of an online item is an important open problem in computational social science. This paper presents an in-depth study of popularity dynamics under external promotions, especially in predicting popularity jumps of online videos, and determining effective and efficient schedules to promote online content. The recently proposed Hawkes Intensity Process (HIP) models popularity as a non-linear interplay between exogenous stimuli and the endogenous reactions. Here, we propose two novel metrics based on HIP: to describe popularity gain per unit of promotion, and to quantify the time it takes for such effects to unfold. We make increasingly accurate forecasts of future popularity by including information about the intrinsic properties of the video, promotions it receives, and the non-linear effects of popularity ranking. We illustrate by simulation the interplay between the unfolding of popularity over time, and the time-sensitive value of resources. Lastly, our model lends a novel explanation of the commonly adopted periodic and constant promotion strategy in advertising, as increasing the perceived viral potential. This study provides quantitative guidelines about setting promotion schedules considering content virality, timing, and economics.",
"title": ""
},
{
"docid": "d603e92c3f3c8ab6a235631ee3a55d52",
"text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. We rst show how to extend the standard notion of classiication by allowing each instance to be associated with multiple labels. We then discuss our approach for multiclass multi-label text categorization which is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identiication from unconstrained spoken customer responses.",
"title": ""
},
{
"docid": "7bd421d61df521c300740f4ed6789fa5",
"text": "Breast cancer has become a common disease around the world. Expert systems are valuable tools that have been successful for the disease diagnosis. In this research, we accordingly develop a new knowledge-based system for classification of breast cancer disease using clustering, noise removal, and classification techniques. Expectation Maximization (EM) is used as a clustering method to cluster the data in similar groups. We then use Classification and Regression Trees (CART) to generate the fuzzy rules to be used for the classification of breast cancer disease in the knowledge-based system of fuzzy rule-based reasoning method. To overcome the multi-collinearity issue, we incorporate Principal Component Analysis (PCA) in the proposed knowledge-based system. Experimental results on Wisconsin Diagnostic Breast Cancer and Mammographic mass datasets show that proposed methods remarkably improves the prediction accuracy of breast cancer. The proposed knowledge-based system can be used as a clinical decision support system to assist medical practitioners in the healthcare practice.",
"title": ""
},
{
"docid": "a0d4089e55a0a392a2784ae50b6fa779",
"text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM). In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.",
"title": ""
},
{
"docid": "6020b70701164e0a14b435153db1743e",
"text": "Supply chain Management has assumed a significant role in firm's performance and has attracted serious research attention over the last few years. In this paper attempt has been made to review the literature on Supply Chain Management. A literature review reveals a considerable spurt in research in theory and practice of SCM. We have presented a literature review for 29 research papers for the period between 2005 and 2011. The aim of this study was to provide an up-to-date and brief review of the SCM literature that was focused on broad areas of the SCM concept.",
"title": ""
},
{
"docid": "71f7ce3b6e4a20a112f6a1ae9c22e8e1",
"text": "The neural correlates of many emotional states have been studied, most recently through the technique of fMRI. However, nothing is known about the neural substrates involved in evoking one of the most overwhelming of all affective states, that of romantic love, about which we report here. The activity in the brains of 17 subjects who were deeply in love was scanned using fMRI, while they viewed pictures of their partners, and compared with the activity produced by viewing pictures of three friends of similar age, sex and duration of friendship as their partners. The activity was restricted to foci in the medial insula and the anterior cingulate cortex and, subcortically, in the caudate nucleus and the putamen, all bilaterally. Deactivations were observed in the posterior cingulate gyrus and in the amygdala and were right-lateralized in the prefrontal, parietal and middle temporal cortices. The combination of these sites differs from those in previous studies of emotion, suggesting that a unique network of areas is responsible for evoking this affective state. This leads us to postulate that the principle of functional specialization in the cortex applies to affective states as well.",
"title": ""
}
] |
scidocsrr
|
0f58724c0c6bc801bf7bcfc0fe5698c4
|
Automatic projector calibration with embedded light sensors
|
[
{
"docid": "0c5dbac11af955a8261a4f3b8b5fe908",
"text": "We describe a calibration and rendering technique for a projector that can render rectangular images under keystoned position. The projector utilizes a rigidly attached camera to form a stereo pair. We describe a very easy to use technique for calibration of the projector-camera pair using only black planar surfaces. We present an efficient rendering method to pre-warp images so that they appear correctly on the screen, and show experimental results.",
"title": ""
}
] |
[
{
"docid": "bd1c93dfc02d90ad2a0c7343236342a7",
"text": "Osteochondritis dissecans (OCD) of the capitellum is an uncommon disorder seen primarily in the adolescent overhead athlete. Unlike Panner disease, a self-limiting condition of the immature capitellum, OCD is multifactorial and likely results from microtrauma in the setting of cartilage mismatch and vascular susceptibility. The natural history of OCD is poorly understood, and degenerative joint disease may develop over time. Multiple modalities aid in diagnosis, including radiography, MRI, and magnetic resonance arthrography. Lesion size, location, and grade determine management, which should attempt to address subchondral bone loss and articular cartilage damage. Early, stable lesions are managed with rest. Surgery should be considered for unstable lesions. Most investigators advocate arthroscopic débridement with marrow stimulation. Fragment fixation and bone grafting also have provided good short-term results, but concerns persist regarding the healing potential of advanced lesions. Osteochondral autograft transplantation appears to be promising and should be reserved for larger, higher grade lesions. Clinical outcomes and return to sport are variable. Longer-term follow-up studies are necessary to fully assess surgical management, and patients must be counseled appropriately.",
"title": ""
},
{
"docid": "a1118a6310736fc36dbc70bd25bd5f28",
"text": "Many studies have documented large and persistent productivity differences across producers, even within narrowly defined industries. This paper both extends and departs from the past literature, which focused on technological explanations for these differences, by proposing that demand-side features also play a role in creating the observed productivity variation. The specific mechanism investigated here is the effect of spatial substitutability in the product market. When producers are densely clustered in a market, it is easier for consumers to switch between suppliers (making the market in a certain sense more competitive). Relatively inefficient producers find it more difficult to operate profitably as a result. Substitutability increases truncate the productivity distribution from below, resulting in higher minimum and average productivity levels as well as less productivity dispersion. The paper presents a model that makes this process explicit and empirically tests it using data from U.S. ready-mixed concrete plants, taking advantage of geographic variation in substitutability created by the industry’s high transport costs. The results support the model’s predictions and appear robust. Markets with high demand density for ready-mixed concrete—and thus high concrete plant densities—have higher lower-bound and average productivity levels and exhibit less productivity dispersion among their producers.",
"title": ""
},
{
"docid": "100c2517fd0d01242ca34a124ef4e694",
"text": "Recently, the pervasiveness of street cameras for security and traffic monitoring opens new challenges to the computer vision technology to provide reliable monitoring schemes. These monitoring schemes require the basic processes of detecting and tracking objects, such as vehicles. However, object detection performance often suffers under occlusion. This work proposes a vehicle occlusion handling improvement of an existing traffic video monitoring system, which was later integrated. Two scenarios were considered in occlusion: indistinct and distinct - wherein the occluded vehicles have similar and dissimilar colors, respectively. K-means clustering using the HSV color space was used for distinct occlusion while sliding window algorithm was used for indistinct occlusion. The proposed method also applies deep convolutional neural networks to further improve vehicle recognition and classification. The CNN model obtained a 97.21% training accuracy and a 98.27% testing accuracy. Moreover, it minimizes the effect of occlusion to vehicle detection and classification. It also identifies common vehicle types (bus, truck, van, sedan, SUV, jeepney, and motorcycle) rather than classifying these as small, medium and large vehicles, which were the previous categories. Despite the implementation and results, it is recommended to improve the occlusion handling issue. The disadvantage of the sliding window algorithm is that it requires a lot of memory and is time-consuming. In case of deploying this research for more substantial purposes and intentions, it is ideal to enhance the CNN model by training it with more varied images of vehicles and to implement the system real-time. The results of this work can serve as a contribution for future works that are significant to traffic monitoring and air quality surveillance.",
"title": ""
},
{
"docid": "85016bc639027363932f9adf7012d7a7",
"text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. This technique usually suffers from high system efficiency as it is required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and area required for the LDOs. This work presents a low-power fully integrated 0.97mm2 DC-DC Buck converter with a tuned series LDO with 1mV voltage ripple in a 0.25μm BiCMOS process. The converter prodives a power supply rejection ratio of more than 60 dB from 1 to 6MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.",
"title": ""
},
{
"docid": "f1deb9134639fb8407d27a350be5b154",
"text": "This work introduces a novel Convolutional Network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a ‘stacked hourglass’ network based on the successive steps of pooling and upsampling that are done to produce a final set of estimates. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"title": ""
},
{
"docid": "dee5489accb832615f63623bc445212f",
"text": "In this paper a simulation-based scheduling system is discussed which was developed for a semiconductor Backend facility. Apart from the usual dispatching rules it uses heuristic search strategies for the optimization of the operating sequences. In practice hereby multiple objectives have to be considered, e. g. concurrent minimization of mean cycle time, maximization of throughput and due date compliance. Because the simulation model is very complex and simulation time itself is not negligible, we emphasize to increase the convergence of heuristic optimization methods, consequentially reducing the number of necessary iterations. Several realized strategies are presented.",
"title": ""
},
{
"docid": "dd62fd669d40571cc11d64789314dba1",
"text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.",
"title": ""
},
{
"docid": "5b43ea9e56c81e98c52b4041b0c32fdf",
"text": "A novel broadband probe-type waveguide-to-microstrip transition adapted for operation in V band is presented. The transition is realized on a standard high frequency printed circuit board (PCB) fixed between a standard WR-15 waveguide and a simple backshort. The microstrip-fed probe is placed at the same side of the PCB with the backshort and acts as an impedance matching element. The proposed transition additionally includes two through holes implemented on the PCB in the center of the transition area. Thus, significant part of the lossy PCB dielectric is removed from that area providing wideband and low-loss performance of the transition. Measurements show that the designed transition has the bandwidth of 50–70 GHz for the −10 dB level of the reflection coefficient with the loss level of only 0.75 dB within the transition bandwidth.",
"title": ""
},
{
"docid": "2b4caf3ecdcd78ac57d8acd5788084d2",
"text": "In the age of information network explosion, Along with the popularity of the Internet, users can link to all kinds of social networking sites anytime and anywhere to interact and discuss with others. This phenomenon indicates that social networking sites have become a platform for interactions between companies and customers so far. Therefore, with the above through social science and technology development trend arising from current social phenomenon, research of this paper, mainly expectations for analysis by the information of interaction between people on the social network, such as: user clicked fans pages, user's graffiti wall message information, friend clicked fans pages etc. Three kinds of personal information for personal preference analysis, and from this huge amount of personal data to find out corresponding diverse group for personal preference category. We can by personal preference information for diversify personal advertising, product recommendation and other services. The paper at last through the actual business verification, the research can improve website browsing pages growth 11%, time on site growth 15%, site bounce rate dropped 13.8%, product click through rate growth 43%, more fully represents the results of this research fit the use's preference.",
"title": ""
},
{
"docid": "93a03403b2e44cddccfbe4e6b6e9d0ef",
"text": "Safety and security are two key properties of Cyber-Physical Systems (CPS). Safety is aimed at protecting the systems from accidental failures in order to avoid hazards, while security is focused on protecting the systems from intentional attacks. They share identical goals – protecting CPS from failing. When aligned within a CPS, safety and security work well together in providing a solid foundation of an invincible CPS, while weak alignment may produce inefficient development and partially-protected systems. The need of such alignment has been recognized by the research community, the industry, as well as the International Society of Automation (ISA), which identified a need of alignment between safety and security standards ISA84 (IEC 61511) and ISA99 (IEC 62443). We propose an approach for aligning CPS safety and security at early development phases by synchronizing safety and security lifecycles based on ISA84 and ISA99 standards. The alignment is achieved by merging safety and security lifecycle phases, and developing an unified model – Failure-Attack-CounTermeasure (FACT) Graph. The FACT graph incorporates safety artefacts (fault trees and safety countermeasures) and security artefacts (attack trees and security countermeasures), and can be used during safety and security alignment analysis, as well as in later CPS development and operation phases, such as verification, validation, monitoring, and periodic safety and security assessment.",
"title": ""
},
{
"docid": "6d594c21ff1632b780b510620484eb62",
"text": "The last several years have seen intensive interest in exploring neural-networkbased models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.",
"title": ""
},
{
"docid": "8240df0c9498482522ef86b4b1e924ab",
"text": "The advent of the IT-led era and the increased competition have forced companies to react to the new changes in order to remain competitive. Enterprise resource planning (ERP) systems offer distinct advantages in this new business environment as they lower operating costs, reduce cycle times and (arguably) increase customer satisfaction. This study examines, via an exploratory survey of 26 companies, the underlying reasons why companies choose to convert from conventional information systems (IS) to ERP systems and the changes brought in, particularly in the accounting process. The aim is not only to understand the changes and the benefits involved in adopting ERP systems compared with conventional IS, but also to establish the best way forward in future ERP applications. The empirical evidence confirms a number of changes in the accounting process introduced with the adoption of ERP systems.",
"title": ""
},
{
"docid": "c95c46d75c2ff3c783437100ba06b366",
"text": "Co-references are traditionally used when integrating data from different datasets. This approach has various benefits such as fault tolerance, ease of integration and traceability of provenance; however, it often results in the problem of entity consolidation, i.e., of objectively stating whether all the co-references do really refer to the same entity; and, when this is the case, whether they all convey the same intended meaning. Relying on the sole presence of a single equivalence (owl:sameAs) statement is often problematic and sometimes may even cause serious troubles. It has been observed that to indicate the likelihood of an equivalence one could use a numerically weighted measure, but the real hard questions of where precisely will these values come from arises. To answer this question we propose a methodology based on a graph clustering algorithm.",
"title": ""
},
{
"docid": "c05d94b354b1d3a024a87e64d06245f1",
"text": "This paper outlines an innovative game model for learning computational thinking (CT) skills through digital game-play. We have designed a game framework where students can practice and develop their skills in CT with little or no programming knowledge. We analyze how this game supports various CT concepts and how these concepts can be mapped to programming constructs to facilitate learning introductory computer programming. Moreover, we discuss the potential benefits of our approach as a support tool to foster student motivation and abilities in problem solving. As initial evaluation, we provide some analysis of feedback from a survey response group of 25 students who have played our game as a voluntary exercise. Structured empirical evaluation will follow, and the plan for that is briefly described.",
"title": ""
},
{
"docid": "46adb4d23404c7f404ede6656ec8712f",
"text": "Over the past decades, the importance of multimedia services such as video streaming has increased considerably. HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for adaptive video streaming services. In HAS, a video is split into multiple segments and encoded at multiple quality levels. State-of-the-art HAS clients employ deterministic heuristics to dynamically adapt the requested quality level based on the perceived network and device conditions. Current HAS client heuristics are however hardwired to fit specific network configurations, making them less flexible to fit a vast range of settings. In this article, an adaptive Q-Learning-based HAS client is proposed. In contrast to existing heuristics, the proposed HAS client dynamically learns the optimal behavior corresponding to the current network environment. Considering multiple aspects of video quality, a tunable reward function has been constructed, giving the opportunity to focus on different aspects of the Quality of Experience, the quality as perceived by the end-user. The proposed HAS client has been thoroughly evaluated using a network-based simulator, investigating multiple reward configurations and Reinforcement Learning specific settings. The evaluations show that the proposed client can outperform standard HAS in the evaluated networking environments.",
"title": ""
},
{
"docid": "588f49731321da292235ca0f36f04465",
"text": "Taylor and Francis Ltd CUS103837.sgm 10.1080/00220270500038545 Journal of Curri ulum Studies 0 00-0 00 (p i t)/0 -0000 (online) Original Article 2 5 & Francis Group Ltd 002 5 Hele Timperley School f Edu a onUniversity of AucklandPrivate Bag 92019AucklandNew [email protected] Hopes that the transformation of schools lies with exceptional leaders have proved both unrealistic and unsustainable. The idea of leadership as distributed across multiple people and situations has proven to be a more useful framework for understanding the realities of schools and how they might be improved. However, empirical work on how leadership is distributed within more and less successful schools is rare. This paper presents key concepts related to distributed leadership and illustrates them with an empirical study in a schoolimprovement context in which varying success was evident. Grounding the theory in this practice-context led to the identification of some risks and benefits of distributing leadership and to a challenge of some key concepts presented in earlier theorizing about leadership and its distribution.",
"title": ""
},
{
"docid": "0bc40c2f559a8daa37fbf2026db2f411",
"text": "A novel algorithm for calculating the QR decomposition (QRD) of polynomial matrix is proposed. The algorithm operates by applying a series of polynomial Givens rotations to transform a polynomial matrix into an upper-triangular polynomial matrix and, therefore, amounts to a generalisation of the conventional Givens method for formulating the QRD of a scalar matrix. A simple example is given to demonstrate the algorithm, but also illustrates two clear advantages of this algorithm when compared to an existing method for formulating the decomposition. Firstly, it does not demonstrate the same unstable behaviour that is sometimes observed with the existing algorithm and secondly, it typically requires less iterations to converge. The potential application of the decomposition is highlighted in terms of broadband multi-input multi-output (MIMO) channel equalisation.",
"title": ""
},
{
"docid": "a709d8ad8d8dd2226a90e0a60a5c36de",
"text": "Intermediate online targeted advertising (IOTA) is a new business model for online targeted advertising. Posting the right banner advertisement to the right web user at the right time is what advertisements allocation does in IOTA business model. This research uses probability theory to build a theoretical model based on Bayesian network to optimize advertisements allocation. The Bayesian network model allows us to calculate the probability that Web user will click the banner based on historical data. And these can help us to make optimal decision in advertisements allocation. Data availability is also be discussed in this paper. An experiment base on practical data is run to verify the feasibility of the Bayesian network model.",
"title": ""
},
{
"docid": "8b3431783f1dc699be1153ad80348d3e",
"text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”",
"title": ""
}
] |
scidocsrr
|
bb7d7a006b01c38d5d7ef8f463592690
|
The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors
|
[
{
"docid": "8010b3fdc1c223202157419c4f61bacf",
"text": "Thanks to information explosion, data for the objects of interest can be collected from increasingly more sources. However, for the same object, there usually exist conflicts among the collected multi-source information. To tackle this challenge, truth discovery, which integrates multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. Several truth discovery methods have been proposed for various scenarios, and they have been successfully applied in diverse application domains. In this survey, we focus on providing a comprehensive overview of truth discovery methods, and summarizing them from different aspects. We also discuss some future directions of truth discovery research. We hope that this survey will promote a better understanding of the current progress on truth discovery, and offer some guidelines on how to apply these approaches in application domains.",
"title": ""
}
] |
[
{
"docid": "bbb592c079f1cb2248ded2e249dcc943",
"text": "A family of super deep networks, referred to as residual networks or ResNet [14], achieved record-beating performance in various visual tasks such as image recognition, object detection, and semantic segmentation. The ability to train very deep networks naturally pushed the researchers to use enormous resources to achieve the best performance. Consequently, in many applications super deep residual networks were employed for just a marginal improvement in performance. In this paper, we propose ∊-ResNet that allows us to automatically discard redundant layers, which produces responses that are smaller than a threshold ∊, without any loss in performance. The ∊-ResNet architecture can be achieved using a few additional rectified linear units in the original ResNet. Our method does not use any additional variables nor numerous trials like other hyperparameter optimization techniques. The layer selection is achieved using a single training process and the evaluation is performed on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. In some instances, we achieve about 80% reduction in the number of parameters.",
"title": ""
},
{
"docid": "e6922a113d619784bd902c06863b5eeb",
"text": "Brake Analysis and NVH (Noise, Vibration and Harshness) Optimization have become critically important areas of application in the Automotive Industry. Brake Noise and Vibration costs approximately $1Billion/year in warranty work in Detroit alone. NVH optimization is now increasingly being used to predict the vehicle tactile and acoustic responses in relation to the established targets for design considerations. Structural optimization coupled with frequency response analysis is instrumental in driving the design process so that the design targets are met in a timely fashion. Usual design targets include minimization of vehicle weight, the adjustment of fundamental eigenmodes and the minimization of acoustic pressure or vibration at selected vehicle locations. Both, Brake Analysis and NVH Optimization are computationally expensive analyses involving eigenvalue calculations. From a computational sense and the viewpoint of MSC.Nastran, brake analysis exercises the CEAD (Complex Eigenvalue Analysis Dmap) module, while NVH optimization invokes the DSADJ (Design Sensitivity using ADJoint method DMAP) module. In this paper, two automotive applications are presented to demonstrate the performance improvements of the CEAD and DSADJ modules on NEC vector-parallel supercomputers. Dramatic improvements in the DSADJ module resulting in approx. 8-9 fold performance improvement as compared to MSC.Nastran V70 were observed for NVH optimization. Also, brake simulations and experiences at General Motors will be presented. This analysis method has been successfully applied to 4 different programs at GM and the simulation results were consistent with laboratory experiments on test vehicles.",
"title": ""
},
{
"docid": "4eb205978a12b780dc26909bee0eebaa",
"text": "This paper introduces CPE, the CIRCE Plugin for Eclipse. The CPE adds to the open-source development environment Eclipse the ability of writing and analysing software requirements written in natural language. Models of the software described by the requirements can be examined on-line during the requirements writing process. Initial UML models and skeleton Java code can be generated from the requirements, and imported into Eclipse for further editing and analysis.",
"title": ""
},
{
"docid": "632fd895e8920cd9b25b79c9d4bd4ef4",
"text": "In minimally invasive surgery, instruments are inserted from the exterior of the patient’s body into the surgical field inside the body through the minimum incision, resulting in limited visibility, accessibility, and dexterity. To address this problem, surgical instruments with articulated joints and multiple degrees of freedom have been developed. The articulations in currently available surgical instruments use mainly wire or link mechanisms. These mechanisms are generally robust and reliable, but the miniaturization of the mechanical parts required often results in problems with size, weight, durability, mechanical play, sterilization, and assembly costs. We thus introduced a compliant mechanism to a laparoscopic surgical instrument with multiple degrees of freedom at the tip. To show the feasibility of the concept, we developed a prototype with two degrees of freedom articulated surgical instruments that can perform the grasping and bending movements. The developed prototype is roughly the same size of the conventional laparoscopic instrument, within the diameter of 4 mm. The elastic parts were fabricated by Ni-Ti alloy and SK-85M, rigid parts ware fabricated by stainless steel, covered by 3D- printed ABS resin. The prototype was designed using iterative finite element method analysis, and has a minimal number of mechanical parts. The prototype showed hysteresis in grasping movement presumably due to the friction; however, the prototype showed promising mechanical characteristics and was fully functional in two degrees of freedom. In addition, the prototype was capable to exert over 15 N grasping that is sufficient for the general laparoscopic procedure. The evaluation tests thus positively showed the concept of the proposed mechanism. The prototype showed promising characteristics in the given mechanical evaluation experiments. Use of a compliant mechanism such as in our prototype may contribute to the advancement of surgical instruments in terms of simplicity, size, weight, dexterity, and affordability.",
"title": ""
},
{
"docid": "309dee96492cf45ed2887701b27ad3ee",
"text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of the software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of a improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.",
"title": ""
},
{
"docid": "79eb0a39106679e80bd1d1edcd100d4d",
"text": "Multi-agent predictive modeling is an essential step for understanding physical, social and team-play systems. Recently, Interaction Networks (INs) were proposed for the task of modeling multi-agent physical systems. One of the drawbacks of INs is scaling with the number of interactions in the system (typically quadratic or higher order in the number of agents). In this paper we introduce VAIN, a novel attentional architecture for multi-agent predictive modeling that scales linearly with the number of agents. We show that VAIN is effective for multiagent predictive modeling. Our method is evaluated on tasks from challenging multi-agent prediction domains: chess and soccer, and outperforms competing multi-agent approaches.",
"title": ""
},
{
"docid": "bb93778655c0bfa525d9539f8f720da6",
"text": "Small embedded integrated circuits (ICs) such as smart cards are vulnerable to the so-called side-channel attacks (SCAs). The attacker can gain information by monitoring the power consumption, execution time, electromagnetic radiation, and other information leaked by the switching behavior of digital complementary metal-oxide-semiconductor (CMOS) gates. This paper presents a digital very large scale integrated (VLSI) design flow to create secure power-analysis-attack-resistant ICs. The design flow starts from a normal design in a hardware description language such as very-high-speed integrated circuit (VHSIC) hardware description language (VHDL) or Verilog and provides a direct path to an SCA-resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. The basis for power analysis attack resistance is discussed. This paper describes how to adjust the library databases such that the regular single-ended static CMOS standard cells implement a dynamic and differential logic style and such that 20 000+ differential nets can be routed in parallel. This paper also explains how to modify the constraints and rules files for the synthesis, place, and differential route procedures. Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis. It successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in an 0.18-mum CMOS",
"title": ""
},
{
"docid": "54b094c7747c8ac0b1fbd1f93e78fd8e",
"text": "It is essential for the marine navigator conducting maneuvers of his ship at sea to know future positions of himself and target ships in a specific time span to effectively solve collision situations. This article presents an algorithm of ship movement trajectory prediction, which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system and practically used on board ships.",
"title": ""
},
{
"docid": "76f66971abcce88b670940c8cc237cfc",
"text": "A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.",
"title": ""
},
{
"docid": "4e23da50d4f1f0c4ecdbbf5952290c98",
"text": "[Context and motivation] User stories are an increasingly popular textual notation to capture requirements in agile software development. [Question/Problem] To date there is no scientific evidence on the effectiveness of user stories. The goal of this paper is to explore how practicioners perceive this artifact in the context of requirements engineering. [Principal ideas/results] We explore perceived effectiveness of user stories by reporting on a survey with 182 responses from practitioners and 21 follow-up semi-structured interviews. The data shows that practitioners agree that using user stories, a user story template and quality guidelines such as the INVEST mnemonic improve their productivity and the quality of their work deliverables. [Contribution] By combining the survey data with 21 semi-structured follow-up interviews, we present 12 findings on the usage and perception of user stories by practitioners that employ user stories in their everyday work environment.",
"title": ""
},
{
"docid": "a39d7490a353f845da616a06eedbb211",
"text": "The explosive growth in online information is making it harder for large, globally distributed organizations to foster collaboration and leverage their intellectual assets. Recently, there has been a growing interest in the development of next generation knowledge management systems focussing on the artificial intelligence based technologies. We propose a generic knowledge management system architecture based on ADIPS (agent-based distributed information processing system) framework. This contributes to the stream of research on intelligent KM system to supports the creation, acquisition, management, and sharing of information that is widely distributed over a network system. It will benefit the users through the automatic provision of timely and relevant information with minimal effort to search for that information. Ontologies which stand out as a keystone of new generation of multiagent information systems, are used for the purpose of structuring the resources. This framework provides personalized information delivery, identifies items of interest to user proactively and enables unwavering management of distributed intellectual assets.",
"title": ""
},
{
"docid": "462ab6cc559053625e7447994b9c4f43",
"text": "The relationship of cortical structure and specific neuronal circuitry to global brain function, particularly its perturbations related to the development and progression of neuropathology, is an area of great interest in neurobehavioral science. Disruption of these neural networks can be associated with a wide range of neurological and neuropsychiatric disorders. Herein we review activity of the Default Mode Network (DMN) in neurological and neuropsychiatric disorders, including Alzheimer's disease, Parkinson's disease, Epilepsy (Temporal Lobe Epilepsy - TLE), attention deficit hyperactivity disorder (ADHD), and mood disorders. We discuss the implications of DMN disruptions and their relationship to the neurocognitive model of each disease entity, the utility of DMN assessment in clinical evaluation, and the changes of the DMN following treatment.",
"title": ""
},
{
"docid": "7321e113293a7198bf88a1744a7ca6c9",
"text": "It is widely claimed that research to discover and develop new pharmaceuticals entails high costs and high risks. High research and development (R&D) costs influence many decisions and policy discussions about how to reduce global health disparities, how much companies can afford to discount prices for lowerand middle-income countries, and how to design innovative incentives to advance research on diseases of the poor. High estimated costs also affect strategies for getting new medicines to the world’s poor, such as the advanced market commitment, which built high estimates into its inflated size and prices. This article takes apart the most detailed and authoritative study of R&D costs in order to show how high estimates have been constructed by industry-supported economists, and to show how much lower actual costs may be. Besides serving as an object lesson in the construction of ‘facts’, this analysis provides reason to believe that R&D costs need not be such an insuperable obstacle to the development of better medicines. The deeper problem is that current incentives reward companies to develop mainly new medicines of little advantage and compete for market share at high prices, rather than to develop clinically superior medicines with public funding so that prices could be much lower and risks to companies lower as well. BioSocieties advance online publication, 7 February 2011; doi:10.1057/biosoc.2010.40",
"title": ""
},
{
"docid": "28f61d005f1b53ad532992e30b9b9b71",
"text": "We propose a method for nonlinear residual echo suppression that consists of extracting spectral features from the far-end signal, and using an artificial neural network to model the residual echo magnitude spectrum from these features. We compare the modeling accuracy achieved by realizations with different features and network topologies, evaluating the mean squared error of the estimated residual echo magnitude spectrum. We also present a low complexity real-time implementation combining an offline-trained network with online adaptation, and investigate its performance in terms of echo suppression and speech distortion for real mobile phone recordings.",
"title": ""
},
{
"docid": "22c85072db1f5b5a51b69fcabf01eb5e",
"text": "Websites’ and mobile apps’ privacy policies, written in natural language, tend to be long and difficult to understand. Information privacy revolves around the fundamental principle of notice and choice, namely the idea that users should be able to make informed decisions about what information about them can be collected and how it can be used. Internet users want control over their privacy, but their choices are often hidden in long and convoluted privacy policy documents. Moreover, little (if any) prior work has been done to detect the provision of choices in text. We address this challenge of enabling user choice by automatically identifying and extracting pertinent choice language in privacy policies. In particular, we present a two-stage architecture of classification models to identify opt-out choices in privacy policy text, labelling common varieties of choices with a mean F1 score of 0.735. Our techniques enable the creation of systems to help Internet users to learn about their choices, thereby effectuating notice and choice and improving Internet privacy.",
"title": ""
},
{
"docid": "7fd21ee95850fec1f1e00b766eebbc06",
"text": "HP’s StoreAll with Express Query is a scalable commercial file archiving product that offers sophisticated file metadata management and search capabilities [3]. A new REST API enables fast, efficient searching to find all files that meet a given set of metadata criteria and the ability to tag files with custom metadata fields. The product brings together two significant systems: a scale out file system and a metadata database based on LazyBase [10]. In designing and building the combined product, we identified several real-world issues in using a pipelined database system in a distributed environment, and overcame several interesting design challenges that were not contemplated by the original research prototype. This paper highlights our experiences.",
"title": ""
},
{
"docid": "2cd8c6284e802d810084dd85f55b8fca",
"text": "Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-theart learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.",
"title": ""
},
{
"docid": "420e6237516e111b7db525ac61d829bc",
"text": "The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrowbandwidth, highly constrained interface [23]. To address it, we seek faster, more natural, and more convenient means for users and computers to exchange information. The user’s side is constrained by the nature of human communication organs and abilities; the computer’s is constrained only by input/output devices and interaction techniques that we can invent. Current technology has been stronger in the computer-to-user direction than user-to-computer, hence today’s user-computer dialogues are rather one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. Using eye movements as a user-tocomputer communication medium can help redress this imbalance. This chapter describes the relevant characteristics of the human eye, eye tracking technology, how to design interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way, and the relationship between eye movement interfaces and virtual environments.",
"title": ""
},
{
"docid": "854f26f24986e729be06962952f9eaa2",
"text": "This paper illustrates the result of land use/cover change in Dhaka Metropolitan of Bangladesh using topographic maps and multi-temporal remotely sensed data from 1960 to 2005. The Maximum likelihood supervised classification technique was used to extract information from satellite data, and post-classification change detection method was employed to detect and monitor land use/cover change. Derived land use/cover maps were further validated by using high resolution images such as SPOT, IRS, IKONOS and field data. The overall accuracy of land cover change maps, generated from Landsat and IRS-1D data, ranged from 85% to 90%. The analysis indicated that the urban expansion of Dhaka Metropolitan resulted in the considerable reduction of wetlands, cultivated land, vegetation and water bodies. The maps showed that between 1960 and 2005 built-up areas increased approximately 15,924 ha, while agricultural land decreased 7,614 ha, vegetation decreased 2,336 ha, wetland/lowland decreased 6,385 ha, and water bodies decreased about 864 ha. The amount of urban land increased from 11% (in 1960) to 344% in 2005. Similarly, the growth of landfill/bare soils category was about 256% in the same period. Much of the city's rapid growth in population has been accommodated in informal settlements with little attempt being made to limit the risk of environmental impairments. The study quantified the patterns of land use/cover change for the last 45 years for Dhaka Metropolitan that forms valuable resources for urban planners and decision makers to devise sustainable land use and environmental planning.",
"title": ""
},
{
"docid": "d9176322068e6ca207ae913b1164b3da",
"text": "Topic Detection and Tracking (TDT) is a variant of classiication in which the classes are not known or xed in advance. Consider for example an incoming stream of news articles or email messages that are to be classiied by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classiied (tracking)|often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilis-tic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical \\garbage collection\" for new event detection, clustering in time to separate the diierent events of a common topic, and deterministic anneal-ing for creating the hierarchy. Preliminary experimental results show promise.",
"title": ""
}
] |
scidocsrr
|
86ecc7239b53adcda02cb6076e7da7f6
|
Advanced Control Architectures for Intelligent Microgrids—Part II: Power Quality, Energy Storage, and AC/DC Microgrids
|
[
{
"docid": "f63dc3a5ceb6df8596410f1fdc7047c3",
"text": "This paper presents an energy management system (EMS) for a stand-alone droop-controlled microgrid, which adjusts generators output power to minimize fuel consumption and also ensures stable operation. It has previously been shown that frequency-droop gains have a significant effect on stability in such microgrids. Relationship between these parameters and stability margins are therefore identified, using qualitative analysis and small-signal techniques. This allows them to be selected to ensure stability. Optimized generator outputs are then implemented in real-time by the EMS, through adjustments to droop characteristics within this constraint. Experimental results from a laboratory-sized microgrid confirm the EMS function.",
"title": ""
},
{
"docid": "c8b36dd0f892c750f17bc714d177f3d1",
"text": "A scheme for controlling parallel connected inverters in a stand-alone AC supply system is presented. A key feature of this scheme is that it uses only those variables which can be measured locally at the inverter, and does not need communication of control signals between the inverters. This feature is important in high reliability uninterruptible power supply (UPS) systems, and in large DC power sources connected to an AC distribution system. Real and reactive power sharing between inverters can be achieved by controlling two independent quantities at the inverter: the power angle and the fundamental inverter voltage magnitude.<<ETX>>",
"title": ""
},
{
"docid": "a5911891697a1b2a407f231cf0ad6c28",
"text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.",
"title": ""
},
{
"docid": "3216434dce13125f4d49c2e6890fd36a",
"text": "In this paper, a power control strategy is proposed for a low-voltage microgrid, where the mainly resistive line impedance, the unequal impedance among distributed generation (DG) units, and the microgrid load locations make the conventional frequency and voltage droop method unpractical. The proposed power control strategy contains a virtual inductor at the interfacing inverter output and an accurate power control and sharing algorithm with consideration of both impedance voltage drop effect and DG local load effect. Specifically, the virtual inductance can effectively prevent the coupling between the real and reactive powers by introducing a predominantly inductive impedance even in a low-voltage network with resistive line impedances. On the other hand, based on the predominantly inductive impedance, the proposed accurate reactive power sharing algorithm functions by estimating the impedance voltage drops and significantly improves the reactive power control and sharing accuracy. Finally, considering the different locations of loads in a multibus microgrid, the reactive power control accuracy is further improved by employing an online estimated reactive power offset to compensate the effects of DG local load power demands. The proposed power control strategy has been tested in simulation and experimentally on a low-voltage microgrid prototype.",
"title": ""
}
] |
[
{
"docid": "4d964a5cfd5b21c6196a31f4b204361d",
"text": "Edge detection is a fundamental tool in the field of image processing. Edge indicates sudden change in the intensity level of image pixels. By detecting edges in the image, one can preserve its features and eliminate useless information. In the recent years, especially in the field of Computer Vision, edge detection has been emerged out as a key technique for image processing. There are various gradient based edge detection algorithms such as Robert, Prewitt, Sobel, Canny which can be used for this purpose. This paper reviews all these gradient based edge detection techniques and provides comparative analysis. MATLAB/Simulink is used as a simulation tool. System is designed by configuring ISE Design suit with MATLAB. Hardware Description Language (HDL) is generated using Xilinx System Generator. HDL code is synthesized and implemented using Field Programmable Gate Array (FPGA).",
"title": ""
},
{
"docid": "791cc656afc2d36e1f491c5a80b77b97",
"text": "With the wide diffusion of smartphones and their usage in a plethora of processes and activities, these devices have been handling an increasing variety of sensitive resources. Attackers are hence producing a large number of malware applications for Android (the most spread mobile platform), often by slightly modifying existing applications, which results in malware being organized in families. Some works in the literature showed that opcodes are informative for detecting malware, not only in the Android platform. In this paper, we investigate if frequencies of ngrams of opcodes are effective in detecting Android malware and if there is some significant malware family for which they are more or less effective. To this end, we designed a method based on state-of-the-art classifiers applied to frequencies of opcodes ngrams. Then, we experimentally evaluated it on a recent dataset composed of 11120 applications, 5560 of which are malware belonging to several different families. Results show that an accuracy of 97% can be obtained on the average, whereas perfect detection rate is achieved for more than one malware family.",
"title": ""
},
{
"docid": "bd64a38a507001f0b17098138f297cc7",
"text": "Affect sensitivity is of the utmost importance for a robot companion to be able to display socially intelligent behaviour, a key requirement for sustaining long-term interactions with humans. This paper explores a naturalistic scenario in which children play chess with the iCat, a robot companion. A person-independent, Bayesian approach to detect the user's engagement with the iCat robot is presented. Our framework models both causes and effects of engagement: features related to the user's non-verbal behaviour, the task and the companion's affective reactions are identified to predict the children's level of engagement. An experiment was carried out to train and validate our model. Results show that our approach based on multimodal integration of task and social interaction-based features outperforms those based solely on non-verbal behaviour or contextual information (94.79 % vs. 93.75 % and 78.13 %).",
"title": ""
},
{
"docid": "79c14cc420caa8db93bc74916ce5bb4d",
"text": "Hadoop has become the de facto platform for large-scale data analysis in commercial applications, and increasingly so in scientific applications. However, Hadoop's byte stream data model causes inefficiencies when used to process scientific data that is commonly stored in highly-structured, array-based binary file formats resulting in limited scalability of Hadoop applications in science. We introduce Sci-Hadoop, a Hadoop plugin allowing scientists to specify logical queries over array-based data models. Sci-Hadoop executes queries as map/reduce programs defined over the logical data model. We describe the implementation of a Sci-Hadoop prototype for NetCDF data sets and quantify the performance of five separate optimizations that address the following goals for several representative aggregate queries: reduce total data transfers, reduce remote reads, and reduce unnecessary reads. Two optimizations allow holistic aggregate queries to be evaluated opportunistically during the map phase; two additional optimizations intelligently partition input data to increase read locality, and one optimization avoids block scans by examining the data dependencies of an executing query to prune input partitions. Experiments involving a holistic function show run-time improvements of up to 8x, with drastic reductions of IO, both locally and over the network.",
"title": ""
},
{
"docid": "3304f4d4c936a416b0ced56ee8e96f20",
"text": "Big Data analytics plays a key role through reducing the data size and complexity in Big Data applications. Visualization is an important approach to helping Big Data get a complete view of data and discover data values. Big Data analytics and visualization should be integrated seamlessly so that they work best in Big Data applications. Conventional data visualization methods as well as the extension of some conventional methods to Big Data applications are introduced in this paper. The challenges of Big Data visualization are discussed. New methods, applications, and technology progress of Big Data visualization are presented.",
"title": ""
},
{
"docid": "2a404b0be685e069083596b4f7a2dd80",
"text": "Sexual relations with intercourse (ASR-I) and high prevalence of teen pregnancies (19.2%, in 2002) among adolescents in Puerto Rico constitute a serious biopsychosocial problem. Studying the consequences and correlates of ASR-I in community and mental health samples of adolescents is important in designing and implementing sexual health programs. Randomized representative cross-sectional samples of male and female adolescents from 11-18 years old (N = 994 from the general community, N = 550 receiving mental health services) who had engaged in ASR-I were the subjects of this study. Demographic, family, and sexual data and the DISC-IV were collected from individual interviews. Logistic regression models, bivariate odds ratios, Chi-squares, and t tests were used in the statistical analysis. The mental health sample showed higher rates of ASR-I, lifetime reports of pregnancy and lower age of ASR-I onset for females. No gender difference in the prevalence of ASR-I was observed in both samples. Older adolescents from the community sample meeting psychiatric diagnosis criteria, and with lower parental monitoring, were more likely to engage in ASR-I, whereas in the mental health sample, adolescents with lower parental monitoring and parental involvement reported significantly more ASR-I. Prevalence of ASR-I and Risky Sexual Behavior (RSB) were almost identical. Adolescents with mental health disorders initiate and engage in ASR-I earlier and more frequently regardless of gender. Older adolescents are more likely to engage in ASR-I and parent-child relationships emerged as a highly relevant predictor of adolescent sexual behavior. The high correspondence between ASR-I and RSB has important clinical implications.",
"title": ""
},
{
"docid": "43850ef433d1419ed37b7b12f3ff5921",
"text": "We have seen ten years of the application of AI planning to the problem of narrative generation in Interactive Storytelling (IS). In that time planning has emerged as the dominant technology and has featured in a number of prototype systems. Nevertheless key issues remain, such as how best to control the shape of the narrative that is generated (e.g., by using narrative control knowledge, i.e., knowledge about narrative features that enhance user experience) and also how best to provide support for real-time interactive performance in order to scale up to more realistic sized systems. Recent progress in planning technology has opened up new avenues for IS and we have developed a novel approach to narrative generation that builds on this. Our approach is to specify narrative control knowledge for a given story world using state trajectory constraints and then to treat these state constraints as landmarks and to use them to decompose narrative generation in order to address scalability issues and the goal of real-time performance in larger story domains. This approach to narrative generation is fully implemented in an interactive narrative based on the “Merchant of Venice.” The contribution of the work lies both in our novel use of state constraints to specify narrative control knowledge for interactive storytelling and also our development of an approach to narrative generation that exploits such constraints. In the article we show how the use of state constraints can provide a unified perspective on important problems faced in IS.",
"title": ""
},
{
"docid": "1a6a7f515aa19b3525989f2cc4aa514f",
"text": "Hundreds of thousands of photographs are uploaded to the internet every minute through various social networking and photo sharing platforms. While some images get millions of views, others are completely ignored. Even from the same users, different photographs receive different number of views. This begs the question: What makes a photograph popular? Can we predict the number of views a photograph will receive even before it is uploaded? These are some of the questions we address in this work. We investigate two key components of an image that affect its popularity, namely the image content and social context. Using a dataset of about 2.3 million images from Flickr, we demonstrate that we can reliably predict the normalized view count of images with a rank correlation of 0.81 using both image content and social cues. In this paper, we show the importance of image cues such as color, gradients, deep learning features and the set of objects present, as well as the importance of various social cues such as number of friends or number of photos uploaded that lead to high or low popularity of images.",
"title": ""
},
{
"docid": "4a27c9c13896eb50806371e179ccbf33",
"text": "A geographical information system (CIS) is proposed as a suitable tool for mapping the spatial distribution of forest fire danger. Using a region severely affected by forest fires in Central Spain as the study area, topography, meteorological data, fuel models and human-caused risk were mapped and incorporated within a GIS. Three danger maps were generated: probability of ignition, fuel hazard and human risk, and all of them were overlaid in an integrated fire danger map, based upon the criteria established by the Spanish Forest Service. CIS make it possible to improve our knowledge of the geographical distribution of fire danger, which is crucial for suppression planning (particularly when hotshot crews are involved) and for elaborating regional fire defence plans.",
"title": ""
},
{
"docid": "d56e3d58fdc0ca09fe7f708c7d12122e",
"text": "About nine billion people in the world are deaf and dumb. The communication between a deaf and hearing person poses to be a serious problem compared to communication between blind and normal visual people. This creates a very little room for them with communication being a fundamental aspect of human life. The blind people can talk freely by means of normal language whereas the deaf-dumb have their own manual-visual language known as sign language. Sign language is a non-verbal form of intercourse which is found amongst deaf communities in world. The languages do not have a common origin and hence difficult to interpret. The project aims to facilitate people by means of a glove based communication interpreter system. The glove is internally equipped with five flex sensors. For each specific gesture, the flex sensor produces a proportional change in resistance. The output from the sensor is analog values it is converted to digital. The processing of these hand gestures is in Arduino Duemilanove Board which is an advance version of the microcontroller. It compares the input signal with predefined voltage levels stored in memory. According to that required output displays on the LCD in the form of text & sound is produced which is stored is memory with the help of speaker. In such a way it is easy for deaf and dumb to communicate with normal people. This system can also be use for the woman security since we are sending a message to authority with the help of smart phone.",
"title": ""
},
{
"docid": "1193515655256edf4c9b490fb5d9f03e",
"text": "Long-term demand forecasting presents the first step in planning and developing future generation, transmission and distribution facilities. One of the primary tasks of an electric utility accurately predicts load demand requirements at all times, especially for long-term. Based on the outcome of such forecasts, utilities coordinate their resources to meet the forecasted demand using a least-cost plan. In general, resource planning is performed subject to numerous uncertainties. Expert opinion indicates that a major source of uncertainty in planning for future capacity resource needs and operation of existing generation resources is the forecasted load demand. This paper presents an overview of the past and current practice in longterm demand forecasting. It introduces methods, which consists of some traditional methods, neural networks, genetic algorithms, fuzzy rules, support vector machines, wavelet networks and expert systems.",
"title": ""
},
{
"docid": "075e263303b73ee5d1ed6cff026aee63",
"text": "Automatic and accurate whole-heart and great vessel segmentation from 3D cardiac magnetic resonance (MR) images plays an important role in the computer-assisted diagnosis and treatment of cardiovascular disease. However, this task is very challenging due to ambiguous cardiac borders and large anatomical variations among different subjects. In this paper, we propose a novel densely-connected volumetric convolutional neural network, referred as DenseVoxNet, to automatically segment the cardiac and vascular structures from 3D cardiac MR images. The DenseVoxNet adopts the 3D fully convolutional architecture for effective volume-to-volume prediction. From the learning perspective, our DenseVoxNet has three compelling advantages. First, it preserves the maximum information flow between layers by a densely-connected mechanism and hence eases the network training. Second, it avoids learning redundant feature maps by encouraging feature reuse and hence requires fewer parameters to achieve high performance, which is essential for medical applications with limited training data. Third, we add auxiliary side paths to strengthen the gradient propagation and stabilize the learning process. We demonstrate the effectiveness of DenseVoxNet by comparing it with the state-of-the-art approaches from HVSMR 2016 challenge in conjunction with MICCAI, and our network achieves the best dice coefficient. We also show that our network can achieve better performance than other 3D ConvNets but with fewer parameters.",
"title": ""
},
{
"docid": "6825c5294da2dfe7a26b6ac89ba8f515",
"text": "Restoring natural walking for amputees has been increasingly investigated because of demographic evolution, leading to increased number of amputations, and increasing demand for independence. The energetic disadvantages of passive pros-theses are clear, and active prostheses are limited in autonomy. This paper presents the simulation, design and development of an actuated knee-ankle prosthesis based on a variable stiffness actuator with energy transfer from the knee to the ankle. This approach allows a good approximation of the joint torques and the kinematics of the human gait cycle while maintaining compliant joints and reducing energy consumption during level walking. This first prototype consists of a passive knee and an active ankle, which are energetically coupled to reduce the power consumption.",
"title": ""
},
{
"docid": "731c5544759a958272e08f928bd364eb",
"text": "A key method of reducing morbidity and mortality is childhood immunization, yet in 2003 only 69% of Filipino children received all suggested vaccinations. Data from the 2003 Philippines Demographic Health Survey were used to identify risk factors for non- and partial-immunization. Results of the multinomial logistic regression analyses indicate that mothers who have less education, and who have not attended the minimally-recommended four antenatal visits are less likely to have fully immunized children. To increase immunization coverage in the Philippines, knowledge transfer to mothers must improve.",
"title": ""
},
{
"docid": "4f81901c2269cd4561dd04f59a04a473",
"text": "The advent of powerful acid-suppressive drugs, such as proton pump inhibitors (PPIs), has revolutionized the management of acid-related diseases and has minimized the role of surgery. The major and universally recognized indications for their use are represented by treatment of gastro-esophageal reflux disease, eradication of Helicobacter pylori infection in combination with antibiotics, therapy of H. pylori-negative peptic ulcers, healing and prophylaxis of non-steroidal anti-inflammatory drug-associated gastric ulcers and control of several acid hypersecretory conditions. However, in the last decade, we have witnessed an almost continuous growth of their use and this phenomenon cannot be only explained by the simple substitution of the previous H2-receptor antagonists, but also by an inappropriate prescription of these drugs. This endless increase of PPI utilization has created an important problem for many regulatory authorities in terms of increased costs and greater potential risk of adverse events. The main reasons for this overuse of PPIs are the prevention of gastro-duodenal ulcers in low-risk patients or the stress ulcer prophylaxis in non-intensive care units, steroid therapy alone, anticoagulant treatment without risk factors for gastro-duodenal injury, the overtreatment of functional dyspepsia and a wrong diagnosis of acid-related disorder. The cost for this inappropriate use of PPIs has become alarming and requires to be controlled. We believe that gastroenterologists together with the scientific societies and the regulatory authorities should plan educational initiatives to guide both primary care physicians and specialists to the correct use of PPIs in their daily clinical practice, according to the worldwide published guidelines.",
"title": ""
},
{
"docid": "fcf5a390d9757ab3c8958638ccc54925",
"text": "This paper presents design equations for the microstrip-to-Substrate Integrated Waveguide (SIW) transition. The transition is decomposed in two distinct parts: the microstrip taper and the microstrip-to-SIW step. Analytical equations are used for the microstrip taper. As for the step, the microstrip is modeled by an equivalent transverse electromagnetic (TEM) waveguide. An equation relating the optimum microstrip width to the SIW width is derived using a curve fitting technique. It is shown that when the step is properly sized, it provides a return loss superior to 20 dB. Three design examples are presented using different substrate permittivity and frequency bands between 18 GHz and 75 GHz. An experimental verification is also presented. The presented technique allows to design transitions covering the complete single-mode SIW bandwidth.",
"title": ""
},
{
"docid": "97e2d66e927c0592b88bef38a8899547",
"text": "Shared services have been heralded as a means of enhancing services and improving the efficiency of their delivery. As such they have been embraced by the private, and increasingly, the public sectors. Yet implementation has proved to be difficult and the number of success stories has been limited. Which factors are critical to success in the development of shared services arrangements is not yet well understood. The current paper examines existing research in the area of critical success factors (CSFs) and suggests that there are actually three distinct types of CSF: outcome, implementation process and operating environment characteristic. Two case studies of public sector shared services in Australia and the Netherlands are examined through a lens that both incorporates all three types of CSF and distinguishes between them.",
"title": ""
},
{
"docid": "653bdddafdb40af00d5d838b1a395351",
"text": "Advances in electronic location technology and the coming of age of mobile computing have opened the door for location-aware applications to permeate all aspects of everyday life. Location is at the core of a large number of high-value applications ranging from the life-and-death context of emergency response to serendipitous social meet-ups. For example, the market for GPS products and services alone is expected to grow to US$200 billion by 2015. Unfortunately, there is no single location technology that is good for every situation and exhibits high accuracy, low cost, and universal coverage. In fact, high accuracy and good coverage seldom coexist, and when they do, it comes at an extreme cost. Instead, the modern localization landscape is a kaleidoscope of location systems based on a multitude of different technologies including satellite, mobile telephony, 802.11, ultrasound, and infrared among others. This lecture introduces researchers and developers to the most popular technologies and systems for location estimation and the challenges and opportunities that accompany their use. For each technology, we discuss the history of its development, the various systems that are based on it, and their trade-offs and their effects on cost and performance. We also describe technology-independent algorithms that are commonly used to smooth streams of location estimates and improve the accuracy of object tracking. Finally, we provide an overview of the wide variety of application domains where location plays a key role, and discuss opportunities and new technologies on the horizon. KEyWoRDS localization, location systems, location tracking, context awareness, navigation, location sensing, tracking, Global Positioning System, GPS, infrared location, ultrasonic location, 802.11 location, cellular location, Bayesian filters, RFID, RSSI, triangulation",
"title": ""
},
{
"docid": "f5e934d65fa436cdb8e5cfa81ea29028",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
}
] |
scidocsrr
|
1a2085922e9c7073815bde0d43b73f1e
|
Leveraging Gloss Knowledge in Neural Word Sense Disambiguation by Hierarchical Co-Attention
|
[
{
"docid": "d735cfbf58094aac2fe0a324491fdfe7",
"text": "We present AutoExtend, a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset/lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.",
"title": ""
},
{
"docid": "2bb0b89491015f124e4b244954508234",
"text": "In recent years, deep neural networks have achieved significant success in Chinese word segmentation and many other natural language processing tasks. Most of these algorithms are end-to-end trainable systems and can effectively process and learn from large scale labeled datasets. However, these methods typically lack the capability of processing rare words and data whose domains are different from training data. Previous statistical methods have demonstrated that human knowledge can provide valuable information for handling rare cases and domain shifting problems. In this paper, we seek to address the problem of incorporating dictionaries into neural networks for the Chinese word segmentation task. Two different methods that extend the bi-directional long short-term memory neural network are proposed to perform the task. To evaluate the performance of the proposed methods, state-of-the-art supervised models based methods and domain adaptation approaches are compared with our methods on nine datasets from different domains. The experimental results demonstrate that the proposed methods can achieve better performance than other state-of-the-art neural network methods and domain adaptation approaches in most cases.",
"title": ""
},
{
"docid": "78a1ebceb57a90a15357390127c443b7",
"text": "In this paper we present a clean, yet effective, model for word sense disambiguation. Our approach leverage a bidirectional long short-term memory network which is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from the raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets, using identical hyperparameter settings, which are in turn tuned on a third set of held out data. We employ no external resources (e.g. knowledge graphs, part-of-speech tagging, etc), language specific features, or hand crafted rules, but still achieve statistically equivalent results to the best state-of-the-art systems, that employ no such limitations.",
"title": ""
}
] |
[
{
"docid": "f8adbe748056a503396bb5b17da84f07",
"text": "Unsupervised word embeddings provide rich linguistic and conceptual information about words. However, they may provide weak information about domain specific semantic relations for certain tasks such as semantic parsing of natural language queries, where such information about words can be valuable. To encode the prior knowledge about the semantic word relations, we present new method as follows: we extend the neural network based lexical word embedding objective function (Mikolov et al. 2013) by incorporating the information about relationship between entities that we extract from knowledge bases. Our model can jointly learn lexical word representations from free text enriched by the relational word embeddings from relational data (e.g., Freebase) for each type of entity relations. We empirically show on the task of semantic tagging of natural language queries that our enriched embeddings can provide information about not only short-range syntactic dependencies but also long-range semantic dependencies between words. Using the enriched embeddings, we obtain an average of 2% improvement in F-score compared to the previous baselines.",
"title": ""
},
{
"docid": "fb31ead676acdd048d699ddfb4ddd17a",
"text": "Software defects prediction aims to reduce software testing efforts by guiding the testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality, testing and for better planning of the resources to meet the timelines. The application of statistical software testing defect prediction model in a real life setting is extremely difficult because it requires more number of data variables and metrics and also historical defect data to predict the next releases or new similar type of projects. This paper explains our statistical model, how it will accurately predict the defects for upcoming software releases or projects. We have used 20 past release data points of software project, 5 parameters and build a model by applying descriptive statistics, correlation and multiple linear regression models with 95% confidence intervals (CI). In this appropriate multiple linear regression model the R-square value was 0.91 and its Standard Error is 5.90%. The Software testing defect prediction model is now being used to predict defects at various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.",
"title": ""
},
{
"docid": "c07cb4fee98fd54b21f2f46b7384f171",
"text": "This study was conducted to provide basic data as part of a project to distinguish naturally occurring organic acids from added preservatives. Accordingly, we investigated naturally occurring levels of sorbic, benzoic and propionic acids in fish and their processed commodities. The levels of sorbic, benzoic and propionic acids in 265 fish and their processed commodities were determined by high-performance liquid chromatography-photodiode detection array (HPLC-PDA) of sorbic and benzoic acids and gas chromatography-mass spectrometry (GC/MS) of propionic acid. For propionic acid, GC-MS was used because of its high sensitivity and selectivity in complicated matrix samples. Propionic acid was detected in 36.6% of fish samples and 50.4% of processed fish commodities. In contrast, benzoic acid was detected in 5.6% of fish samples, and sorbic acid was not detected in any sample. According to the Korean Food and Drug Administration (KFDA), fishery products and salted fish may only contain sorbic acid in amounts up to 2.0 g kg-1 and 1.0 g kg-1, respectively. The results of the monitoring in this study can be considered violations of KFDA regulations (total 124; benzoic acid 8, propionic acid 116). However, it is difficult to distinguish naturally generated organic acids and artificially added preservatives in fishery products. Therefore, further studies are needed to extend the database for distinction of naturally generated organic acids and added preservatives.",
"title": ""
},
{
"docid": "f076287d119022d75ccc9d05f405be2b",
"text": "With the emergence of new application-specific sensor and Ad-hoc networks, increasingly complex and custom protocols will be designed and deployed. We propose a framework to systematically design and evaluate networking protocols based on a 'building block' approach. In this approach, each protocol is broken down into a set of parameterized modules called \"building blocks\", each having its own specific functionality. The properties of these building blocks and their interaction define the overall behavior of the protocol. In this paper, we aim to identify the major research challenges and questions in the building block approach. By addressing some of those questions, we point out potential directions to analyze and understand the behavior of networking protocols systematically. We discuss two case studies on utilizing the building block approach for analyzing Ad-hoc routing protocols and IP mobility protocols in a systematic manner.",
"title": ""
},
{
"docid": "ae7117416b4a07d2b15668c2c8ac46e3",
"text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.",
"title": ""
},
{
"docid": "282424d3a055bcc2d0d5c99c6f8e58e9",
"text": "Over the last few years, neuroimaging techniques have contributed greatly to the identification of the structural and functional neuroanatomy of anxiety disorders. The amygdala seems to be a crucial structure for fear and anxiety, and has consistently been found to be activated in anxiety-provoking situations. Apart from the amygdala, the insula and anterior cinguiate cortex seem to be critical, and ail three have been referred to as the \"fear network.\" In the present article, we review the main findings from three major lines of research. First, we examine human models of anxiety disorders, including fear conditioning studies and investigations of experimentally induced panic attacks. Then we turn to research in patients with anxiety disorders and take a dose look at post-traumatic stress disorder and obsessive-compulsive disorder. Finally, we review neuroimaging studies investigating neural correlates of successful treatment of anxiety, focusing on exposure-based therapy and several pharmacological treatment options, as well as combinations of both.",
"title": ""
},
{
"docid": "0ce0eda3b12e71163c44d649f35f424c",
"text": "In the light of the identified problem, the primary objective of this study was to investigate the perceived role of strategic leadership in strategy implementation in South African organisations. The conclusion is that strategic leadership positively contributes to effective strategy implementation in South African organisations.",
"title": ""
},
{
"docid": "3e1690ae4d61d87edb0e4c3ce40f6a88",
"text": "Despite previous efforts in auditing software manually and automatically, buffer overruns are still being discovered in programs in use. A dynamic bounds checker detects buffer overruns in erroneous software before it occurs and thereby prevents attacks from corrupting the integrity of the system. Dynamic buffer overrun detectors have not been adopted widely because they either (1) cannot guard against all buffer overrun attacks, (2) break existing code, or (3) incur too high an overhead. This paper presents a practical detector called CRED (C Range Error Detector) that avoids each of these deficiencies. CRED finds all buffer overrun attacks as it directly checks for the bounds of memory accesses. Unlike the original referent-object based bounds-checking technique, CRED does not break existing code because it uses a novel solution to support program manipulation of out-of-bounds addresses. Finally, by restricting the bounds checks to strings in a program, CRED’s overhead is greatly reduced without sacrificing protection in the experiments we performed. CRED is implemented as an extension of the GNU C compiler version 3.3.1. The simplicity of our design makes possible a robust implementation that has been tested on over 20 open-source programs, comprising over 1.2 million lines of C code. CRED proved effective in detecting buffer overrun attacks on programs with known vulnerabilities, and is the only tool found to guard against a testbed of 20 different buffer overflow attacks[34]. Finding overruns only on strings impose an overhead of less This research was performed while the first author was at Stanford University, and this material is based upon work supported in part by the National Science Foundation under Grant No. 0086160. than 26% for 14 of the programs, and an overhead of up to 130% for the remaining six, while the previous state-ofthe-art bounds checker by Jones and Kelly breaks 60% of the programs and is 12 times slower. Incorporating wellknown techniques for optimizing bounds checking into CRED could lead to further performance improvements.",
"title": ""
},
{
"docid": "526586bfdce4f8bdd0841dcd05ac05a2",
"text": "Systematic reviews show some evidence for the efficacy of group-based social skills group training in children and adolescents with autism spectrum disorder, but more rigorous research is needed to endorse generalizability. In addition, little is known about the perspectives of autistic individuals participating in social skills group training. Using a qualitative approach, the objective of this study was to examine experiences and opinions about social skills group training of children and adolescents with higher functioning autism spectrum disorder and their parents following participation in a manualized social skills group training (\"KONTAKT\"). Within an ongoing randomized controlled clinical trial (NCT01854346) and based on outcome data from the Social Responsiveness Scale, six high responders and five low-to-non-responders to social skills group training and one parent of each child (N = 22) were deep interviewed. Interestingly, both high responders and low-to-non-responders (and their parents) reported improvements in social communication and related skills (e.g. awareness of own difficulties, self-confidence, independence in everyday life) and overall treatment satisfaction, although more positive intervention experiences were expressed by responders. These findings highlight the added value of collecting verbal data in addition to quantitative data in a comprehensive evaluation of social skills group training.",
"title": ""
},
{
"docid": "eca39d2aa9172808b7545624870b4002",
"text": "In this paper, the isolated full-bridge boost converter with active clamp is described and a new active-clamping algorithm to improve the efficiency is suggested. In the proposed method, the resonance between the clamp capacitor and the leakage inductor is utilized to reduce switching losses. The loss analysis is performed by simulation and the improved performance is confirmed by experimental results.",
"title": ""
},
{
"docid": "6b21fc5c80677016fbd3d52d721ecce5",
"text": "This paper focuses on the problem of generating human face pictures from specific attributes. The existing CNN-based face generation models, however, either ignore the identity of the generated face or fail to preserve the identity of the reference face image. Here we address this problem from the view of optimization, and suggest an optimization model to generate human face with the given attributes while keeping the identity of the reference image. The attributes can be obtained from the attribute-guided image or by tuning the attribute features of the reference image. With the deep convolutional network \"VGG-Face\", the loss is defined on the convolutional feature maps. We then apply the gradient decent algorithm to solve this optimization problem. The results validate the effectiveness of our method for attribute driven and identity-preserving face generation.",
"title": ""
},
{
"docid": "fe043223b37f99419d9dc2c4d787cfbb",
"text": "We describe a Markov chain Monte Carlo based particle filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and/or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo (MCMC) sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint particle filter. Since a joint particle filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the particle filter with an MCMC sampling step. The resulting filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large scale experiment on a video-sequence of over 10,000 frames in length.",
"title": ""
},
{
"docid": "2090537c798654c335963afba0c45a5b",
"text": "This paper introduces a novel transductive support vector machine (TSVM) model and compares it with the traditional inductive SVM on a key problem in bioinformatics - promoter recognition. While inductive reasoning is concerned with the development of a model (a function) to approximate data from the whole problem space (induction), and consecutively using this model to predict output values for a new input vector (deduction), in the transductive inference systems a model is developed for every new input vector based on some closest to the new vector data from an existing database and this model is used to predict only the output for this vector. The TSVM outperforms by far the inductive SVM models applied on the same problems. Analysis is given on the advantages and disadvantages of the TSVM. Hybrid TSVM-evolving connections systems are discussed as directions for future research.",
"title": ""
},
{
"docid": "cf0d47466adec1adebeb14f89f0009cb",
"text": "We developed a novel learning-based human detection system, which can detect people having different sizes and orientations, under a wide variety of backgrounds or even with crowds. To overcome the affects of geometric and rotational variations, the system automatically assigns the dominant orientations of each block-based feature encoding by using the rectangularand circulartype histograms of orientated gradients (HOG), which are insensitive to various lightings and noises at the outdoor environment. Moreover, this work demonstrated that Gaussian weight and tri-linear interpolation for HOG feature construction can increase detection performance. Particularly, a powerful feature selection algorithm, AdaBoost, is performed to automatically select a small set of discriminative HOG features with orientation information in order to achieve robust detection results. The overall computational time is further reduced significantly without any performance loss by using the cascade-ofrejecter structure, whose hyperplanes and weights of each stage are estimated by using the AdaBoost approach.",
"title": ""
},
{
"docid": "444364c2ab97bef660ab322420fc5158",
"text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.",
"title": ""
},
{
"docid": "122bc83bcd27b95092c64cf1ad8ee6a8",
"text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
{
"docid": "b1bb8eda4f7223a4c6dd8201ff5abfae",
"text": "Recommender systems are constructed to search the content of interest from overloaded information by acquiring useful knowledge from massive and complex data. Since the amount of information and the complexity of the data structure grow, it has become a more interesting and challenging topic to find an efficient way to process, model, and analyze the information. Due to the Global Positioning System (GPS) data recording the taxi's driving time and location, the GPS-equipped taxi can be regarded as the detector of an urban transport system. This paper proposes a Taxi-hunting Recommendation System (Taxi-RS) processing the large-scale taxi trajectory data, in order to provide passengers with a waiting time to get a taxi ride in a particular location. We formulated the data offline processing system based on HotSpotScan and Preference Trajectory Scan algorithms. We also proposed a new data structure for frequent trajectory graph. Finally, we provided an optimized online querying subsystem to calculate the probability and the waiting time of getting a taxi. Taxi-RS is built based on the real-world trajectory data set generated by 12 000 taxis in one month. Under the condition of guaranteeing the accuracy, the experimental results show that our system can provide more accurate waiting time in a given location compared with a naïve algorithm.",
"title": ""
},
{
"docid": "ee0d858955c3c45ac3d990d3ad9d56ed",
"text": "Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.",
"title": ""
},
{
"docid": "d8c4e6632f90c3dd864be93db881a382",
"text": "Document understanding techniques such as document clustering and multidocument summarization have been receiving much attention recently. Current document clustering methods usually represent the given collection of documents as a document-term matrix and then conduct the clustering process. Although many of these clustering methods can group the documents effectively, it is still hard for people to capture the meaning of the documents since there is no satisfactory interpretation for each document cluster. A straightforward solution is to first cluster the documents and then summarize each document cluster using summarization methods. However, most of the current summarization methods are solely based on the sentence-term matrix and ignore the context dependence of the sentences. As a result, the generated summaries lack guidance from the document clusters. In this article, we propose a new language model to simultaneously cluster and summarize documents by making use of both the document-term and sentence-term matrices. By utilizing the mutual influence of document clustering and summarization, our method makes; (1) a better document clustering method with more meaningful interpretation; and (2) an effective document summarization method with guidance from document clustering. Experimental results on various document datasets show the effectiveness of our proposed method and the high interpretability of the generated summaries.",
"title": ""
},
{
"docid": "803e720791105b2aa1ba802a9c0764a8",
"text": "Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain-machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs.",
"title": ""
}
] |
scidocsrr
|
8e01e82f5affbb6f12a7122d68f89bd7
|
From high heels to weed attics: a syntactic investigation of chick lit and literature
|
[
{
"docid": "ab677299ffa1e6ae0f65daf5de75d66c",
"text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.",
"title": ""
}
] |
[
{
"docid": "b1ef897890df4c719d85dd339f8dee70",
"text": "Repositories of health records are collections of events with varying number and sparsity of occurrences within and among patients. Although a large number of predictive models have been proposed in the last decade, they are not yet able to simultaneously capture cross-attribute and temporal dependencies associated with these repositories. Two major streams of predictive models can be found. On one hand, deterministic models rely on compact subsets of discriminative events to anticipate medical conditions. On the other hand, generative models offer a more complete and noise-tolerant view based on the likelihood of the testing arrangements of events to discriminate a particular outcome. However, despite the relevance of generative predictive models, they are not easily extensible to deal with complex grids of events. In this work, we rely on the Markov assumption to propose new predictive models able to deal with cross-attribute and temporal dependencies. Experimental results hold evidence for the utility and superior accuracy of generative models to anticipate health conditions, such as the need for surgeries. Additionally, we show that the proposed generative models are able to decode temporal patterns of interest (from the learned lattices) with acceptable completeness and precision levels, and with superior efficiency for voluminous repositories.",
"title": ""
},
{
"docid": "f1e36a749d456326faeda90bc744b70d",
"text": "In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called ‘epitome’ such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets.",
"title": ""
},
{
"docid": "8230ddd7174a2562c0fe0f83b1bf7cf7",
"text": "Metaphors are fundamental to creative thought and expression. Newly coined metaphors regularly infiltrate our collective vocabulary and gradually become familiar, but it is unclear how this shift from novel to conventionalized meaning happens in the brain. We investigated the neural career of metaphors in a functional magnetic resonance imaging study using extensively normed new metaphors and simulated the ordinary, gradual experience of metaphor conventionalization by manipulating participants' exposure to these metaphors. Results showed that the conventionalization of novel metaphors specifically tunes activity within bilateral inferior prefrontal cortex, left posterior middle temporal gyrus, and right postero-lateral occipital cortex. These results support theoretical accounts attributing a role for the right hemisphere in processing novel, low salience figurative meanings, but also show that conventionalization of metaphoric meaning is a bilaterally-mediated process. Metaphor conventionalization entails a decreased neural load within semantic networks rather than a hemispheric or regional shift across brain areas.",
"title": ""
},
{
"docid": "e276068ede51c081c71a483b260e546c",
"text": "The selection of hyper-parameters plays an important role to the performance of least-squares support vector machines (LS-SVMs). In this paper, a novel hyper-parameter selection method for LS-SVMs is presented based on the particle swarm optimization (PSO). The proposed method does not need any priori knowledge on the analytic property of the generalization performance measure and can be used to determine multiple hyper-parameters at the same time. The feasibility of this method is examined on benchmark data sets. Different kinds of kernel families are investigated by using the proposed method. Experimental results show that the best or quasi-best test performance could be obtained by using the scaling radial basis kernel function (SRBF) and RBF kernel functions, respectively. & 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman [email protected] 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "5a1df710132da15c611c91a0550b1dbb",
"text": "This chapter is concerned with sound and complete algorithms for testing satisfiability, i.e., algorithms that are guaranteed to terminate with a correct decision on the satisfiability/unsatisfiability of the given CNF. One can distinguish between a few approaches on which complete satisfiability algorithms have been based. The first approach is based on existential quantification, where one successively eliminates variables from the CNF without changing the status of its satisfiability. When all variables have been eliminated, the satisfiability test is then reduced into a simple test on a trivial CNF. The second approach appeals to sound and complete inference rules, applying them successively until either a contradiction is found (unsatisfiable CNF) or until the CNF is closed under these rules without finding a contradiction (satisfiable CNF). The third approach is based on systematic search in the space of truth assignments, and is marked by its modest space requirements. The last approach we will discuss is based on combining search and inference, leading to algorithms that currently underly most modern complete SAT solvers. We start in the next section by establishing some technical preliminaries that will be used throughout the chapter. We will follow by a treatment of algorithms that are based on existential quantification in Section 3.3 and then algorithms based on inference rules in Section 3.4. Algorithms based on search are treated in Section 3.5, while those based on the combination of search and inference are treated in Section 3.6. Note that some of the algorithms presented here could fall into more than one class, depending on the viewpoint used. Hence, the classification presented in Sections 3.3-3.6 is only one of the many possibilities.",
"title": ""
},
{
"docid": "83e5f62d7f091260d4ae91c2d8f72d3d",
"text": "Document recognition and retrieval technologies complement one another, providing improved access to increasingly large document collections. While recognition and retrieval of textual information is fairly mature, with wide-spread availability of optical character recognition and text-based search engines, recognition and retrieval of graphics such as images, figures, tables, diagrams, and mathematical expressions are in comparatively early stages of research. This paper surveys the state of the art in recognition and retrieval of mathematical expressions, organized around four key problems in math retrieval (query construction, normalization, indexing, and relevance feedback), and four key problems in math recognition (detecting expressions, detecting and classifying symbols, analyzing symbol layout, and constructing a representation of meaning). Of special interest is the machine learning problem of jointly optimizing the component algorithms in a math recognition system, and developing effective indexing, retrieval and relevance feedback algorithms for math retrieval. Another important open problem is developing user interfaces that seamlessly integrate recognition and retrieval. Activity in these important research areas is increasing, in part because math notation provides an excellent domain for studying problems common to many document and graphics recognition and retrieval applications, and also because mature applications will likely provide substantial benefits for education, research, and mathematical literacy.",
"title": ""
},
{
"docid": "cc55fa9990cada5a26079251f9155eeb",
"text": "Despite the tremendous concern in the insurance industry over insurance fraud by customers, the federal Insurance Fraud Prevention Act primarily targets internal fraud, or insurer fraud, in which criminal acts such as embezzlement could trigger an insurer’s insolvency, rather than fraud perpetrated by policyholders such as filing false or inflated claims—insurance fraud. Fraud committed against insurers by executives and employees is potentially one of the costliest issues facing the industry and attracts increasing attention from regulators, legislators, and the industry. One book includes reports on some 140 insurance executives convicted of major fraud in recent years. This study investigates whether insurers’ weapons against insurance fraud are also used effectively to combat insurer fraud. Several variables are tested—characteristics of perpetrators, schemes employed, and types of detection and investigation techniques utilized—to compare the characteristics of insurer fraud with those of insurance fraud and also with those in non-insurance industries. A detailed survey of 8,000 members of the Association of Certified Fraud Examiners provides the database; chisquare statistics, the Median (Brown-Mood) test, and the Kruskal-Wallis test were used to measure for significant differences. Most of the authors’ expectations were supported by the analysis, but some surprises were found, such as the relative ineffectiveness of insurer internal control systems at identifying employee fraud. Internal whistleblowing also was not as prevalent in the insurance industry as in other organizations. Insurers were more likely to prosecute their employees for fraud than were other industries, however.",
"title": ""
},
{
"docid": "47897fc364551338fcaee76d71568e2e",
"text": "As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of bipartite graphs for discovering social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms on the similarity matrices and clustering coefficient of one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into different end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experiment results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations on Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.",
"title": ""
},
{
"docid": "3fd6d0ef0240b2fdd2a9c76a023ecab6",
"text": "In this work, an exponential spline method is developed and a nalyzed for approximating solutions of calculus of variati ons problems. The method uses a spline interpolant, which is con structed from exponential spline. It is proved to be secondrder convergent. Finally some illustrative examples are includ ed to demonstrate the applicability of the new technique. Nu merical results confirm the order of convergence predicted by the analysis.",
"title": ""
},
{
"docid": "018018f9fa28cd4c24a1f3e6f29cb63e",
"text": "In recent years, accidents in food quality & safety frequently occur, and more and more people have begun to think highly of food quality & safety and encourage food producers to be able to trace the origin of ingredients and process of production along the supply chain. With the development of IT, more and more practices have shown that the supply chain of agricultural products should rely on IT. The using of IT directly decides the degree of agricultural informatization and efficiency of agricultural supply chain management. In this paper, on the basis of introducing the meanings and characteristics of supply chain management and agricultural supply chain management, it also analyzes the information flow's attributes throughout the process of agricultural supply and the technological attributes of Internet of Things, finally, the designing method and architecture of integrated information platform of agricultural supply chain management based on internet of things was discussed in detail.",
"title": ""
},
{
"docid": "44d5f8816285d81a731761ad00157e6f",
"text": "Gunshot detection traditionally has been a task performed with acoustic signal processing. While this type of detection can give cities, civil services and training institutes a method to identify specific locations of gunshots, the nature of acoustic detection may not provide the fine-grained detection accuracy and sufficient metrics for performance assessment. If however you examine a different signature of a gunshot, the recoil, detection of the same event with accelerometers can provide you with persona and firearm model level detection abilities. The functionality of accelerometer sensors in wrist worn devices have increased significantly in recent time. From fitness trackers to smart watches, accelerometers have been put to use in various activity recognition and detection applications. In this paper, we design an approach that is able to account for the variations in firearm generated recoil, as recorded by a wrist worn accelerometer, and helps categorize the impulse forces. Our experiments show that not only can wrist worn accelerometers detect the differences in handgun rifle and shotgun gunshots, but the individual models of firearms can be distinguished from each other. The application of this framework could be extended in the future to include real time detection embedded in smart devices to assist in firearms training and also help in crime detection and prosecution.",
"title": ""
},
{
"docid": "c323c25c05f2461fb0c0ef7cbf655eb4",
"text": "While deep convolutional neural networks (CNN) have been successfully applied for 2D image analysis, it is still challenging to apply them to 3D anisotropic volumes, especially when the within-slice resolution is much higher than the between-slice resolution and when the amount of 3D volumes is relatively small. On one hand, direct learning of CNN with 3D convolution kernels suffers from the lack of data and likely ends up with poor generalization; insufficient GPU memory limits the model size or representational power. On the other hand, applying 2D CNN with generalizable features to 2D slices ignores between-slice information. Coupling 2D network with LSTM to further handle the between-slice information is not optimal due to the difficulty in LSTM learning. To overcome the above challenges, we propose a 3D Anisotropic Hybrid Network (AHNet) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the desired strong generalization capability for withinslice information while naturally exploiting between-slice information for more effective modelling. The focal loss is further utilized for more effective end-to-end learning. We experiment with the proposed 3D AH-Net on two different medical image analysis tasks, namely lesion detection from a Digital Breast Tomosynthesis volume, and liver and liver tumor segmentation from a Computed Tomography volume and obtain the state-of-the-art results.",
"title": ""
},
{
"docid": "241cd26632a394e5d922be12ca875fe1",
"text": "Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator.",
"title": ""
},
{
"docid": "e9199c0f3b08979c03e0c82399ac7160",
"text": "Background: ADHD can have a negative impact on occupational performance of a child, interfering with ADLs, IADLs, education, leisure, and play. However, at this time, a cumulative review of evidence based occupational therapy interventions for children with ADHD do not exist. Purpose: The purpose of this scholarly project was to complete a systematic review of what occupational therapy interventions are effective for school-aged children with ADHD. Methods: An extensive systematic review for level T, II, or II research articles was completed using CINAHL and OT Search. Inclusion, exclusion, subject terms, and words or phrases were determined with assistance from the librarian at the Harley French Library at the University of North Dakota. Results: The systematic review yielded !3 evidence-based articles with interventions related to cognition, motor, sensory, and play. Upon completion of the systematic review, articles were categorized based upon an initial literature search understanding common occupational therapy interventions for children with ADHD. Specifically, level I, II, and III occupational therapy research is available for interventions addressing cognition, motor, sensory, and play. Conclusion: Implications for practice and education include the need for foundational and continuing education opportunities reflecting evidenced-based interventions for ADHD. Further research is needed to solidify best practices for children with ADHD including more rigorous studies across interventions.",
"title": ""
},
{
"docid": "347e7b80b2b0b5cd5f0736d62fa022ae",
"text": "This article presents the results of an interview study on how people perceive and play social network games on Facebook. During recent years, social games have become the biggest genre of games if measured by the number of registered users. These games are designed to cater for large audiences in their design principles and values, a free-to-play revenue model and social network integration that make them easily approachable and playable with friends. Although these games have made the headlines and have been seen to revolutionize the game industry, we still lack an understanding of how people perceive and play them. For this article, we interviewed 18 Finnish Facebook users from a larger questionnaire respondent pool of 134 people. This study focuses on a user-centric approach, highlighting the emergent experiences and the meaning-making of social games players. Our findings reveal that social games are usually regarded as single player games with a social twist, and as suffering partly from their design characteristics, while still providing a wide spectrum of playful experiences for different needs. The free-to-play revenue model provides an easy access to social games, but people disagreed with paying for additional content for several reasons.",
"title": ""
},
{
"docid": "1aa39f265d476fca4c54af341b6f2bde",
"text": "Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying what dimensions of a single input are most responsible for a DNN’s output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat surprisingly, we find that DNNs with randomly-initialized weights produce explanations that are both visually and quantitatively similar to those produced by DNNs with learned weights. Our conjecture is that this phenomenon occurs because these explanations are dominated by the lower level features of a DNN, and that a DNN’s architecture provides a strong prior which significantly affects the representations learned at these lower layers.",
"title": ""
},
{
"docid": "3d2e170b4cd31d0e1a28c968f0b75cf6",
"text": "Fog Computing is a new variety of the cloud computing paradigm that brings virtualized cloud services to the edge of the network to control the devices in the IoT. We present a pattern for fog computing which describes its architecture, including its computing, storage and networking services. Fog computing is implemented as an intermediate platform between end devices and cloud computing data centers. The recent popularity of the Internet of Things (IoT) has made fog computing a necessity to handle a variety of devices. It has been recognized as an important platform to provide efficient, location aware, close to the edge, cloud services. Our model includes most of the functionality found in current fog architectures.",
"title": ""
},
{
"docid": "4b4dc34feba176a30bced5b7dbe4fe7b",
"text": "The Bitcoin ecosystem has suffered frequent thefts and losses affecting both businesses and individuals. The insider threat faced by a business is particularly serious. Due to the irreversibility, automation, and pseudonymity of transactions, Bitcoin currently lacks support for the sophisticated internal control systems deployed by modern businesses to deter fraud. We seek to bridge this gap. We show that a thresholdsignature scheme compatible with Bitcoin’s ECDSA signatures can be used to enforce complex yet useful security policies including: (1) shared control of a wallet, (2) secure bookkeeping, a Bitcoin-specific form of accountability, (3) secure delegation of authority, and (4) two-factor security for personal wallets.",
"title": ""
}
] |
scidocsrr
|
8c711d4e7d44ffa96acf681c2721de0a
|
Extractive multi-document summarization using population-based multicriteria optimization
|
[
{
"docid": "ea8b083238554866d36ac41b9c52d517",
"text": "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .",
"title": ""
},
{
"docid": "c0a67a4d169590fa40dfa9d80768ef09",
"text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form i s scanned by a n IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \" auto-abstract. \" Introduction",
"title": ""
},
{
"docid": "779e169d273fd34e15baba72c9c9ca2d",
"text": "This paper proposes an optimization-based model for generic document summarization. The model generates a summary by extracting salient sentences from documents. This approach uses the sentence-to-document collection, the summary-to-document collection and the sentence-to-sentence relations to select salient sentences from given document collection and reduce redundancy in the summary. To solve the optimization problem has been created an improved differential evolution algorithm. The algorithm can adjust crossover rate adaptively according to the fitness of individuals. We implemented the proposed model on multi-document summarization task. Experiments have been performed on DUC2002 and DUC2004 data sets. The experimental results provide strong evidence that the proposed optimization-based approach is a viable method for document summarization. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "3b12764eb87f9942f04638378da82a8b",
"text": "The demand for an all-in-one phone with integrated personal information management and data access capabilities is beginning to accelerate. While personal digital assistants (PDAs) with built-in cellular, WiFi, and Voice-Over-IP technologies have the ability to serve these needs in a single package, the rate at which energy is consumed by PDA-based phones is very high. Thus, these devices can quickly drain their own batteries and become useless to their owner.In this paper, we introduce a technique to increase the battery lifetime of a PDA-based phone by reducing its idle power, the power a device consumes in a \"standby\" state. To reduce the idle power, we essentially shut down the device and its wireless network card when the device is not being used---the device is powered only when an incoming call is received. Using this technique, we can increase the battery lifetime by up to 115%.In this paper, we describe the design of our \"wake-on-wireless\" energy-saving strategy and the prototype device we implemented. To evaluate our technique, we compare it with alternative approaches. Our results show that our technique can provide a significant lifetime improvement over other technologies.",
"title": ""
},
{
"docid": "d7bc62e7fca922f9b97e42deff85d010",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
{
"docid": "14a45e3e7aadee56b7d2e28c692aba9f",
"text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.",
"title": ""
},
{
"docid": "6f768934f02c0e559801a7b98d0fbbd7",
"text": "Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"title": ""
},
{
"docid": "fec16029c250505dabd2b4a9bd18b227",
"text": "This collection is a very welcome addition to the literature on automatic speech synthesis, also known as \"text-to-speech\" (TTS). It has been more than a decade since a comprehensive, edited collection of chapters on this topic has been published. Because much has changed in TTS research over the last ten years, this text will prove very useful to workers in the field. Along with Dutoit's recent book (An Introduction to Textto-Speech Synthesis [1997], also published by Kluwer), it is essential reading for anyone serious about TTS research. (Other recent works have included chapters on TTS, but their main focus has been on other aspects of speech processing, and thus the TTS details there are far fewer. While Bailly and Benoit's collection Talking Machines [1992] has synthesis as its sole focus, it derives from numerous presentations at a workshop and suffers accordingly due to its many uneven short chapters.) A more direct comparison can be made to Allen, Hunnicutt, and Klatt's book on MITalk (1987), especially as both that book and the one currently under review describe in detail specific TTS systems developed at two of the major research centres involved in speech synthesis over the years. As the foreword of the present book notes, the MITalk system was largely based on morphological decomposition of words and synthesis via a Klatt formant architecture, while the modular Bell system is distinguished by its regular relations for text analysis, its use of concatenation of diphone units, and its emphasis on the importance of careful selection of texts and recording conditions. While they cover similar ground, the newer Sproat book is quite different from Dutoit's. It has seven authors, all contributors to the Bell Labs system that is described in detail in the book. Often in multiauthor books, one finds a significant unevenness in coverage and style across chapters due to lack of coordination among the authors. This is much less apparent in Sproat's b o o k because all authors worked at the same lab and because one author (van Santen) was involved in seven of the chapters; at least one of the two principal authors (Sproat and van Santen) contributed to all nine chapters. Further distinguishing this book is its emphasis on multilingual TTS: the Bell system exists for ten diverse languages, and the book provides many specific examples of interesting problems in different languages. While several multilingual synthesizers are available commercially, most technical literature has focused on one language at a time. Given that all speech synthesis is based on the same human speech production mechanism and that the world's languages share many aspects of phonetics, it is prudent to examine how a uniform methodology can be applied to many different languages for TTS. Unlike speech coders, which normally function equally well for all languages without adjustment, speech recognizers and synthesizers necessarily need training for individual languages. One of the foci of this book is minimizing the work to",
"title": ""
},
{
"docid": "1b99fda09e6e4e6ae7e7ef6f30f46b4b",
"text": "T widespread implementation of customer relationship management technologies in business has allowed companies to increasingly focus on both acquiring and retaining customers. The challenge of designing incentive mechanisms that simultaneously focus on customer acquisition and customer retention comes from the fact that customer acquisition and customer retention are usually separate but intertwined tasks that make providing proper incentives more difficult. The present study develops incentive mechanisms that simultaneously address acquisition and retention of customers with an emphasis on the interactions between them. The main focus of this study is to examine the impact of the negative effect of acquisition on retention, i.e., the spoiling effect, on firm performance under direct selling and delegation of customer acquisition. Our main finding is that the negative effect of acquisition on retention has a significant impact on acquisition and retention efforts and firm profit. In particular, when the customer acquisition and retention are independent, the firm’s profit is higher under direct selling than under delegation; however, when acquisition spoils retention, interestingly, the firm’s profit may be higher under delegation. Our analysis also finds that the spoiling effect not only reduces the optimal acquisition effort but may also reduce retention effort under both direct selling and delegation. Comparing the optimal efforts under direct selling and delegation, the acquisition effort is always lower under delegation regardless of the spoiling effect, but the retention effort may be higher under delegation with the spoiling effect. Furthermore, when the customer antagonism effect from price promotions is considered, our main results hold regarding the firm’s preferences between direct selling and delegation, which demonstrates the robustness of our model.",
"title": ""
},
{
"docid": "bb8fe4145e1ea2337f5cc1a18a9a348f",
"text": "Automatic License Plate Recognition (ALPR) has been a frequent topic of research due to many practical applications. However, many of the current solutions are still not robust in real-world situations, commonly depending on many constraints. This paper presents a robust and efficient ALPR system based on the state-of-the-art YOLO object detector. The Convolutional Neural Networks (CNNs) are trained and finetuned for each ALPR stage so that they are robust under different conditions (e.g., variations in camera, lighting, and background). Specially for character segmentation and recognition, we design a two-stage approach employing simple data augmentation tricks such as inverted License Plates (LPs) and flipped characters. The resulting ALPR approach achieved impressive results in two datasets. First, in the SSIG dataset, composed of 2,000 frames from 101 vehicle videos, our system achieved a recognition rate of 93.53% and 47 Frames Per Second (FPS), performing better than both Sighthound and OpenALPR commercial systems (89.80% and 93.03%, respectively) and considerably outperforming previous results (81.80%). Second, targeting a more realistic scenario, we introduce a larger public dataset1 dataset, designed to ALPR. This dataset contains 150 videos and 4,500 frames captured when both camera and vehicles are moving and also contains different types of vehicles (cars, motorcycles, buses and trucks). In our proposed dataset, the trial versions of commercial systems achieved recognition rates below 70%. On the other hand, our system performed better, with recognition rate of 78.33% and 35 FPS.The UFPR-ALPR dataset is publicly available to the research community at https://web.inf.ufpr.br/vri/databases/ufpr-alpr/ subject to privacy restrictions.",
"title": ""
},
{
"docid": "ca1c232e84e7cb26af6852007f215715",
"text": "Word embedding-based methods have received increasing attention for their flexibility and effectiveness in many natural language-processing (NLP) tasks, including Word Similarity (WS). However, these approaches rely on high-quality corpus and neglect prior knowledge. Lexicon-based methods concentrate on human’s intelligence contained in semantic resources, e.g., Tongyici Cilin, HowNet, and Chinese WordNet, but they have the drawback of being unable to deal with unknown words. This article proposes a three-stage framework for measuring the Chinese word similarity by incorporating prior knowledge obtained from lexicons and statistics into word embedding: in the first stage, we utilize retrieval techniques to crawl the contexts of word pairs from web resources to extend context corpus. In the next stage, we investigate three types of single similarity measurements, including lexicon similarities, statistical similarities, and embedding-based similarities. Finally, we exploit simple combination strategies with math operations and the counter-fitting combination strategy using optimization method. To demonstrate our system’s efficiency, comparable experiments are conducted on the PKU-500 dataset. Our final results are 0.561/0.516 of Spearman/Pearson rank correlation coefficient, which outperform the state-of-the-art performance to the best of our knowledge. Experiment results on Chinese MC-30 and SemEval-2012 datasets show that our system also performs well on other Chinese datasets, which proves its transferability. Besides, our system is not language-specific and can be applied to other languages, e.g., English.",
"title": ""
},
{
"docid": "25226432d192bf7192cf6d8dbee3cab7",
"text": "According to the distributional inclusion hypothesis, entailment between words can be measured via the feature inclusions of their distributional vectors. In recent work, we showed how this hypothesis can be extended from words to phrases and sentences in the setting of compositional distributional semantics. This paper focuses on inclusion properties of tensors; its main contribution is a theoretical and experimental analysis of how feature inclusion works in different concrete models of verb tensors. We present results for relational, Frobenius, projective, and holistic methods and compare them to the simple vector addition, multiplication, min, and max models. The degrees of entailment thus obtained are evaluated via a variety of existing wordbased measures, such as Weed’s and Clarke’s, KL-divergence, APinc, balAPinc, and two of our previously proposed metrics at the phrase/sentence level. We perform experiments on three entailment datasets, investigating which version of tensor-based composition achieves the highest performance when combined with the sentence-level measures.",
"title": ""
},
{
"docid": "e74573560a8da7be758c619ba85202df",
"text": "This paper proposes two hybrid connectionist structural acoustical models for robust context independent phone like and word like units for speaker-independent recognition system. Such structure combines strength of Hidden Markov Models (HMM) in modeling stochastic sequences and the non-linear classification capability of Artificial Neural Networks (ANN). Two kinds of Neural Networks (NN) are investigated: Multilayer Perceptron (MLP) and Elman Recurrent Neural Networks (RNN). The hybrid connectionist-HMM systems use discriminatively trained NN to estimate the a posteriori probability distribution among subword units given the acoustic observations. We efficiently tested the performance of the conceived systems using the TIMIT database in clean and noisy environments with two perceptually motivated features: MFCC and PLP. Finally, the robustness of the systems is evaluated by using a new preprocessing stage for denoising based on wavelet transform. A significant improvement in performance is obtained with the proposed method.",
"title": ""
},
{
"docid": "03024b4232d8c233ecfc0c6c9751de0e",
"text": "Area X is a songbird basal ganglia nucleus that is required for vocal learning. Both Area X and its immediate surround, the medial striatum (MSt), contain cells displaying either striatal or pallidal characteristics. We used pathway-tracing techniques to compare directly the targets of Area X and MSt with those of the lateral striatum (LSt) and globus pallidus (GP). We found that the zebra finch LSt projects to the GP, substantia nigra pars reticulata (SNr) and pars compacta (SNc), but not the thalamus. The GP is reciprocally connected with the subthalamic nucleus (STN) and projects to the SNr and motor thalamus analog, the ventral intermediate area (VIA). In contrast to the LSt, Area X and surrounding MSt project to the ventral pallidum (VP) and dorsal thalamus via pallidal-like neurons. A dorsal strip of the MSt contains spiny neurons that project to the VP. The MSt, but not Area X, projects to the ventral tegmental area (VTA) and SNc, but neither MSt nor Area X projects to the SNr. Largely distinct populations of SNc and VTA dopaminergic neurons innervate Area X and surrounding the MSt. Finally, we provide evidence consistent with an indirect pathway from the cerebellum to the basal ganglia, including Area X. Area X projections thus differ from those of the GP and LSt, but are similar to those of the MSt. These data clarify the relationships among different portions of the oscine basal ganglia as well as among the basal ganglia of birds and mammals.",
"title": ""
},
{
"docid": "9246700eca378427ea2ea3c20a4377b3",
"text": "This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number iterations which depends only poly-logarithmically on dimension (i.e., it is almost “dimension-free”). The convergence rate of this procedure matches the wellknown convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.",
"title": ""
},
{
"docid": "7e85b8528370f2c0f1427b2d4ce30bf6",
"text": "This paper deals with a new challenge for digital forensic experts - the forensic analysis of social networks. There is a lot of identity theft, theft of personal data, public defamation, cyber stalking and other criminal activities on social network sites. This paper will present a forensic analysis of social networks and cloud forensics in the internet environment. For the purpose of this research one case study is created like - a common practical scenario where the combination of identity theft and public defamation through Facebook activity is explored. Investigators must find the person who stole some others profile, who publish inappropriate and prohibited contents performing act of public defamation and humiliation of profile owner.",
"title": ""
},
{
"docid": "f8435db6c6ea75944d1c6b521e0f3dd3",
"text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "78a0898f35113547cdc3adb567ad7afb",
"text": "Phishing is a form of online identity theft. Phishers use social engineering to steal victims' personal identity data and financial account credentials. Social engineering schemes use spoofed e-mails to lure unsuspecting victims into counterfeit websites designed to trick recipients into divulging financial data such as credit card numbers, account usernames, passwords and social security numbers. This is called a deceptive phishing attack. In this paper, a thorough overview of a deceptive phishing attack and its countermeasure techniques, which is called anti-phishing, is presented. Firstly, technologies used by phishers and the definition, classification and future works of deceptive phishing attacks are discussed. Following with the existing anti-phishing techniques in literatures and research-stage technologies are shown, and a thorough analysis which includes the advantages and shortcomings of countermeasures is given. At last, we show the research of why people fall for phishing attack.",
"title": ""
},
{
"docid": "ee1293cc2e11543c5dad4473b0592f58",
"text": "Mobile ad hoc networks’ (MANETs) inherent power limitation makes power-awareness a critical requirement for MANET protocols. In this paper, we propose a new routing metric, the drain rate, which predicts the lifetime of a node as a function of current traffic conditions. We describe the Minimum Drain Rate (MDR) mechanism which uses a combination of the drain rate with remaining battery capacity to establish routes. MDR can be employed by any existing MANET routing protocol to achieve a dual goal: extend both nodal battery life and connection lifetime. Using the ns-2 simulator and the Dynamic Source Routing (DSR) protocol, we compared MDR to the Minimum Total Transmission Power Routing (MTPR) scheme and the Min-Max Battery Cost Routing (MMBCR) scheme and proved that MDR is the best approach to achieve the dual goal.",
"title": ""
},
{
"docid": "d1f6cde4534115d1f2db1a6294f83bf9",
"text": "Siblings play a key, supportive role in the lives of many lesbian and gay adults. Yet siblings are rarely considered in the literature regarding the coming-out process (D'Augelli et al., 1998; Hilton & Szymanski, 2011; LaSala, 2010; Savin-Williams & Dubé, 1998). To fill this gap in the research literature, we carried out a comparative case study in the country of Belgium between two sets of siblings-three Romani brothers with one sibling identifying as a gay male and three White sisters with one sibling identifying as a lesbian. These two cases were pulled from a larger qualitative study (Haxhe & D'Amore, 2014) of 102 native French-speaking Belgian participants. Findings of the present study revealed that siblings offered critical socio-emotional support in the coming out of their lesbian and gay sibling, particularly with disclosing to parents and with fostering self-acceptance.",
"title": ""
},
{
"docid": "3427d27d6c5c444a90a184183f991208",
"text": "Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as \"Virtual Network Embedding (VNE)\" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.",
"title": ""
},
{
"docid": "9aad2d4dd17bb3906add18578df28580",
"text": "Likelihood ratio policy gradient methods have been some of the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (i) using the past experience to estimate only the gradient of the expected return U(θ) at the current policy parameterization θ, rather than to obtain a more complete estimate of U(θ), and (ii) using past experience under the current policy only rather than using all past experience to improve the estimates. We present a new policy search method, which leverages both of these observations as well as generalized baselines—a new technique which generalizes commonly used baseline techniques for policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds.",
"title": ""
},
{
"docid": "ceb59133deb7828edaf602308cb3450a",
"text": "Abstract While there has been a great deal of interest in the modelling of non-linearities and regime shifts in economic time series, there is no clear consensus regarding the forecasting abilities of these models. In this paper we develop a general approach to predict multiple time series subject to Markovian shifts in the regime. The feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.",
"title": ""
}
] |
scidocsrr
|
7d4599fe80763cf2607d6ad2b6922c8c
|
White-Box Traceable Ciphertext-Policy Attribute-Based Encryption Supporting Any Monotone Access Structures
|
[
{
"docid": "428ecd77262fc57c5d0d19924a10f02a",
"text": "In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in",
"title": ""
}
] |
[
{
"docid": "88bdaa1ee78dd24f562e632cdb5ed396",
"text": "We present a novel paraphrase fragment pair extraction method that uses a monolingual comparable corpus containing different articles about the same topics or events. The procedure consists of document pair extraction, sentence pair extraction, and fragment pair extraction. At each stage, we evaluate the intermediate results manually, and tune the later stages accordingly. With this minimally supervised approach, we achieve 62% of accuracy on the paraphrase fragment pairs we collected and 67% extracted from the MSR corpus. The results look promising, given the minimal supervision of the approach, which can be further scaled up.",
"title": ""
},
{
"docid": "2f22f99bd7e386811cee961ac292642f",
"text": "Recent findings that human serum contains stably expressed microRNAs (miRNAs) have revealed a great potential of serum miRNA signature as disease fingerprints to diagnosis. Here we used genome-wide serum miRNA expression analysis to investigate the value of serum miRNAs as biomarkers for the diagnosis of Alzheimer's disease (AD). Illumina HiSeq 2000 sequencing followed by individual quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) assays was used to test the difference in levels of serum miRNAs between 50 AD patients and 50 controls in the screening stages. The detected serum miRNAs then were validated by qRT-PCR in 158 patients and 155 controls. MiR-98-5p, miR-885-5p, miR-483-3p, miR-342-3p, miR-191-5p, and miR-let-7d-5p displayed significantly different expression levels in AD patients compared with controls. Among the 6 miRNAs, miR-342-3p has the best sensitivity (81.5%) and specificity (70.1%) and was correlated to Mini-Mental State Examination score. This study identified six serum miRNAs that distinguish AD patients from healthy controls with high sensitivity and specificity. Serum miRNA panel (or miR-342-3p alone) may serve as a novel, noninvasive biomarker for AD.",
"title": ""
},
{
"docid": "916e10c8bd9f5aa443fa4d8316511c94",
"text": "A full-bridge LLC resonant converter with series-parallel connected transformers for an onboard battery charger of electric vehicles is proposed, which can realize zero voltage switching turn-on of power switches and zero current switching turn-off of rectifier diodes. In this converter, two same small transformers are employed instead of the single transformer in the traditional LLC resonant converter. The primary windings of these two transformers are series-connected to obtain equal primary current, while the secondary windings are parallel-connected to be provided with the same secondary voltage, so the power can be automatically balanced. Series-connection can reduce the turns of primary windings. Parallel-connection can reduce the current stress of the secondary windings and the conduction loss of rectifier diodes. Compared with the traditional LLC resonant converter with single transformer under same power level, the smaller low-profile cores can be used to reduce the transformers loss and improve heat dissipation. In this paper, the operating principle, steady state analysis, and design of the proposed converter are described, simulation and experimental prototype of the proposed LLC converter is established to verify the effectiveness of the proposed converter.",
"title": ""
},
{
"docid": "7a1e32dc80550704207c5e0c7e73da26",
"text": "Stock markets are affected by many uncertainties and interrelated economic and political factors at both local and global levels. The key to successful stock market forecasting is achieving best results with minimum required input data. To determine the set of relevant factors for making accurate predictions is a complicated task and so regular stock market analysis is very essential. More specifically, the stock market’s movements are analyzed and predicted in order to retrieve knowledge that could guide investors on when to buy and sell. It will also help the investor to make money through his investment in the stock market. This paper surveys large number of resources from research papers, web-sources, company reports and other available sources.",
"title": ""
},
{
"docid": "b59d49106614382cf97f276529d1ddd1",
"text": "core microarchitecture B. Sinharoy J. A. Van Norstrand R. J. Eickemeyer H. Q. Le J. Leenstra D. Q. Nguyen B. Konigsburg K. Ward M. D. Brown J. E. Moreira D. Levitan S. Tung D. Hrusecky J. W. Bishop M. Gschwind M. Boersma M. Kroener M. Kaltenbach T. Karkhanis K. M. Fernsler The POWER8i processor is the latest RISC (Reduced Instruction Set Computer) microprocessor from IBM. It is fabricated using the company’s 22-nm Silicon on Insulator (SOI) technology with 15 layers of metal, and it has been designed to significantly improve both single-thread performance and single-core throughput over its predecessor, the POWER7A processor. The rate of increase in processor frequency enabled by new silicon technology advancements has decreased dramatically in recent generations, as compared to the historic trend. This has caused many processor designs in the industry to show very little improvement in either single-thread or single-core performance, and, instead, larger numbers of cores are primarily pursued in each generation. Going against this industry trend, the POWER8 processor relies on a much improved core and nest microarchitecture to achieve approximately one-and-a-half times the single-thread performance and twice the single-core throughput of the POWER7 processor in several commercial applications. Combined with a 50% increase in the number of cores (from 8 in the POWER7 processor to 12 in the POWER8 processor), the result is a processor that leads the industry in performance for enterprise workloads. This paper describes the core microarchitecture innovations made in the POWER8 processor that resulted in these significant performance benefits.",
"title": ""
},
{
"docid": "616b6c0c4ecda05ae263efdc0ffcb081",
"text": "Grounded theory, as an evolving qualitative research method, is a product of its history as well as of its epistemology. Within the literature, there have been a number of discussions focusing on the differences between Glaser's (1978, 1992) and Strauss's (1987, 1990) versions of grounded theory. The purpose of this article is to add a level of depth and breadth to this discussion through specifically exploring the Glaser-Strauss debate by comparing the data analysis processes and procedures advocated by Glaser and by Strauss. To accomplish this task, the authors present the article in two sections. First, they provide relevant background information on grounded theory as a research method. Second, they pursue a more in-depth discussion of the positions of Glaser, using Glaser's work, and Strauss, using Strauss's and Strauss and Corbin's (1990) work, regarding the different phases of data analysis, specifically addressing the coding procedures, verification, and the issue of forcing versus emergence.",
"title": ""
},
{
"docid": "888e8f68486c08ffe538c46ba76de85c",
"text": "Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query. Traditional learning to rank models employ machine learning techniques over hand-crafted IR features. By contrast, neural models learn representations of language from raw text that can bridge the gap between query and document vocabulary. Unlike classical IR models, these new machine learning based approaches are data-hungry, requiring large scale training data before they can be deployed. This tutorial introduces basic concepts and intuitions behind neural IR models, and places them in the context of traditional retrieval models. We begin by introducing fundamental concepts of IR and different neural and non-neural approaches to learning vector representations of text. We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end. We introduce deep neural networks next, discussing popular deep architectures. Finally, we review the current DNN models for information retrieval. We conclude with a discussion on potential future directions for neural IR.",
"title": ""
},
{
"docid": "69624d1ab7b438d5ff4b5192f492a11a",
"text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.",
"title": ""
},
{
"docid": "96ee31337d66b8ccd3876c1575f9b10c",
"text": "Although different modeling techniques have been proposed during the last 300 years, the differential equation formalism proposed by Newton and Leibniz has been the tool of choice for modeling and problem solving Taylor (1996); Wainer (2009). Differential equations provide a formal mathematical method (sometimes also called an analytical method) for studying the entity of interest. Computational methods based on differential equations could not be easily applied in studying human-made dynamic systems (e.g., traffic controllers, robotic arms, automated factories, production plants, computer networks, VLSI circuits). These systems are usually referred to as discrete event systems because their states do not change continuously but, rather, because of the occurrence of events. This makes them asynchronous, inherently concurrent, and highly nonlinear, rendering their modeling and simulation different from that used in traditional approaches. In order to improve the model definition for this class of systems, a number of techniques were introduced, including Petri Nets, Finite State Machines, min-max algebra, Timed Automata, etc. Banks & Nicol. (2005); Cassandras (1993); Cellier & Kofman. (2006); Fishwick (1995); Law & Kelton (2000); Toffoli & Margolus. (1987). Wireless Sensor Network (WSN) is a discrete event system which consists of a network of sensor nodes equipped with sensing, computing, power, and communication modules to monitor certain phenomenon such as environmental data or object tracking Zhao & Guibas (2004). Emerging applications of wireless sensor networks are comprised of asset and warehouse *[email protected] †[email protected] ‡[email protected] 1",
"title": ""
},
{
"docid": "b70a70896a3d904c25adb126b584a858",
"text": "A case of a fatal cardiac episode resulting from an unusual autoerotic practice involving the use of a vacuum cleaner, is presented. Scene investigation and autopsy findings are discussed.",
"title": ""
},
{
"docid": "e3977392317f51a7cd1742e93a48bea2",
"text": "There is increasing amount of evidence pointing toward a high prevalence of psychiatric conditions among individuals with hypermobile type of Ehlers-Danlos syndrome (JHS/hEDS). A literature review confirms a strong association between anxiety disorders and JHSh/hEDS, and there is also limited but growing evidence that JHSh/hEDS is also associated with depression, eating, and neuro-developmental disorders as well as alcohol and tobacco misuse. The underlying mechanisms behind this association include genetic risks, autonomic nervous system dysfunction, increased exteroceptive and interoceptive mechanisms and decreased proprioception. Recent neuroimaging studies have also shown an increase response in emotion processing brain areas which could explain the high affective reactivity seen in JHS/hEDS. Management of these patients should include psychiatric and psychological approaches, not only to relieve the clinical conditions but also to improve abilities to cope through proper drug treatment, psychotherapy, and psychological rehabilitation adequately coupled with modern physiotherapy. A multidimensional approach to this \"neuroconnective phenotype\" should be implemented to ensure proper assessment and to guide for more specific treatments. Future lines of research should further explore the full dimension of the psychopathology associated with JHS/hEDS to define the nature of the relationship. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "9c717907ec6af9a4edebae84e71ef3f1",
"text": "We study a model of fairness in secure computation in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty. We then show how the Bitcoin network can be used to achieve the above notion of fairness in the two-party as well as the multiparty setting (with a dishonest majority). In particular, we propose new ideal functionalities and protocols for fair secure computation and fair lottery in this model. One of our main contributions is the definition of an ideal primitive, which we call F CR (CR stands for “claim-or-refund”), that formalizes and abstracts the exact properties we require from the Bitcoin network to achieve our goals. Naturally, this abstraction allows us to design fair protocols in a hybrid model in which parties have access to the F CR functionality, and is otherwise independent of the Bitcoin ecosystem. We also show an efficient realization of F CR that requires only two Bitcoin transactions to be made on the network. Our constructions also enjoy high efficiency. In a multiparty setting, our protocols only require a constant number of calls to F CR per party on top of a standard multiparty secure computation protocol. Our fair multiparty lottery protocol improves over previous solutions which required a quadratic number of Bitcoin transactions.",
"title": ""
},
{
"docid": "c67fbd1def37b9b207ae8a2457da43ee",
"text": "INTRODUCTION\nDiabetes is an increasingly important condition globally and robust estimates of its prevalence are required for allocating resources.\n\n\nMETHODS\nData sources from 1980 to April 2011 were sought and characterised. The Analytic Hierarchy Process (AHP) was used to select the most appropriate study or studies for each country, and estimates for countries without data were modelled. A logistic regression model was used to generate smoothed age-specific estimates which were applied to UN population estimates for 2011.\n\n\nRESULTS\nA total of 565 data sources were reviewed, of which 170 sources from 110 countries were selected. In 2011 there are 366 million people with diabetes, and this is expected to rise to 552 million by 2030. Most people with diabetes live in low- and middle-income countries, and these countries will also see the greatest increase over the next 19 years.\n\n\nDISCUSSION\nThis paper builds on previous IDF estimates and shows that the global diabetes epidemic continues to grow. Recent studies show that previous estimates have been very conservative. The new IDF estimates use a simple and transparent approach and are consistent with recent estimates from the Global Burden of Disease study. IDF estimates will be updated annually.",
"title": ""
},
{
"docid": "5e0663f759b23147f9d1a3eeb6ab4b04",
"text": "We describe the fabrication and characterization of matrix-addressable microlight-emitting diode (micro-LED) arrays based on InGaN, having elemental diameter of 20 /spl mu/m and array size of up to 128 /spl times/ 96 elements. The introduction of a planar topology prior to contact metallization is an important processing step in advancing the performance of these devices. Planarization is achieved by chemical-mechanical polishing of the SiO/sub 2/-deposited surface. In this way, the need for a single contact pad for each individual element can be eliminated. The resulting significant simplification in the addressing of the pixels opens the way to scaling to devices with large numbers of elements. Compared to conventional broad-area LEDs, the micrometer-scale devices exhibit superior light output and current handling capabilities, making them excellent candidates for a range of uses including high-efficiency and robust microdisplays.",
"title": ""
},
{
"docid": "8030903c8f1402044bc5bce9daa1644d",
"text": "We propose a generalization of exTNFS algorithm recently introduced by Kim and Barbulescu (CRYPTO 2016). The algorithm, exTNFS, is a state-of-the-art algorithm for discrete logarithm in Fpn in the medium prime case, but it only applies when n = ηκ is a composite with nontrivial factors η and κ such that gcd(η, κ) = 1. Our generalization, however, shows that exTNFS algorithm can be also adapted to the setting with an arbitrary composite n maintaining its best asymptotic complexity. We show that one can solve discrete logarithm in medium case in the running time of Lpn(1/3, 3 √ 48/9) (resp. Lpn(1/3, 1.71) if multiple number fields are used), where n is an arbitrary composite. This should be compared with a recent variant by Sarkar and Singh (Asiacrypt 2016) that has the fastest running time of Lpn(1/3, 3 √ 64/9) (resp. Lpn(1/3, 1.88)) when n is a power of prime 2. When p is of special form, the complexity is further reduced to Lpn(1/3, 3 √ 32/9). On the practical side, we emphasize that the keysize of pairing-based cryptosystems should be updated following to our algorithm if the embedding degree n remains composite.",
"title": ""
},
{
"docid": "702a4a841f24f3b9464989360ac44b41",
"text": "Small-cell lung cancer (SCLC) is an aggressive malignancy associated with a poor prognosis. First-line treatment has remained unchanged for decades, and a paucity of effective treatment options exists for recurrent disease. Nonetheless, advances in our understanding of SCLC biology have led to the development of novel experimental therapies. Poly [ADP-ribose] polymerase (PARP) inhibitors have shown promise in preclinical models, and are under clinical investigation in combination with cytotoxic therapies and inhibitors of cell-cycle checkpoints.Preclinical data indicate that targeting of histone-lysine N-methyltransferase EZH2, a regulator of chromatin remodelling implicated in acquired therapeutic resistance, might augment and prolong chemotherapy responses. High expression of the inhibitory Notch ligand Delta-like protein 3 (DLL3) in most SCLCs has been linked to expression of Achaete-scute homologue 1 (ASCL1; also known as ASH-1), a key transcription factor driving SCLC oncogenesis; encouraging preclinical and clinical activity has been demonstrated for an anti-DLL3-antibody–drug conjugate. The immune microenvironment of SCLC seems to be distinct from that of other solid tumours, with few tumour-infiltrating lymphocytes and low levels of the immune-checkpoint protein programmed cell death 1 ligand 1 (PD-L1). Nonetheless, immunotherapy with immune-checkpoint inhibitors holds promise for patients with this disease, independent of PD-L1 status. Herein, we review the progress made in uncovering aspects of the biology of SCLC and its microenvironment that are defining new therapeutic strategies and offering renewed hope for patients.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "b8702cb8d18ae53664f3dfff95152764",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
{
"docid": "53c836280ad99b28c892ef85f31a5985",
"text": "This paper focuses on the design of 1 bit full adder circuit using Gate Diffusion Input Logic. The proposed adder schematics are developed using DSCH2 CAD tool, and their layouts are generated with Microwind 3 VLSI CAD tool. A 1 bit adder circuits are analyzed using standard CMOS 120nm features with corresponding voltage of 1.2V. The Simulated results of the proposed adder is compared with those of Pass transistor, Transmission Function, and CMOS based adder circuits. The proposed adder dissipates low power and responds faster.",
"title": ""
},
{
"docid": "03b3d8220753570a6b2f21916fe4f423",
"text": "Recent systems have been developed for sentiment classification, opinion recogni tion, and opinion analysis (e.g., detect ing polarity and strength). We pursue an other aspect of opinion analysis: identi fying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Con ditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source iden tification as a sequence tagging task, Au toSlog learns extraction patterns. Our re sults show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.",
"title": ""
}
] |
scidocsrr
|
f2fee38fbb95b7deb8e3a5bbed612027
|
Capturing user reading behaviors for personalized document summarization
|
[
{
"docid": "ae7405600f7cf3c7654cc2db73a22340",
"text": "The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency often is used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these features on automatic summarizers, but also their role in human summarization. Our research shows that a frequency based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.",
"title": ""
},
{
"docid": "85feabca6a73d83be10a75c98d8cb046",
"text": "We propose a new recommendation algorithm for online documents, images and videos, which is personalized. Our idea is to rely on the attention time of individual users captured through commodity eye-tracking as the essential clue. The prediction of user interest over a certain online item (a document, image or video) is based on the user's attention time acquired using vision-based commodity eye-tracking during his previous reading, browsing or video watching sessions over the same type of online materials. After acquiring a user's attention times over a collection of online materials, our algorithm can predict the user's probable attention time over a new online item through data mining. Based on our proposed algorithm, we have developed a new online content recommender system for documents, images and videos. The recommendation results produced by our algorithm are evaluated by comparing with those manually labeled by users as well as by commercial search engines including Google (Web) Search, Google Image Search and YouTube.",
"title": ""
}
] |
[
{
"docid": "b1e0fa6b41fb697db8dfe5520b79a8e6",
"text": "The problem of computing the minimum-angle bounding cone of a set of three-dimensional vectors has numero cations in computer graphics and geometric modeling. One such application is bounding the tangents of space cur vectors normal to a surface in the computation of the intersection of two surfaces. No optimal-time exact solution to this problem has been yet given. This paper presents a roadmap for a few strate provide optimal or near-optimal (time-wise) solutions to this problem, which are also simple to implement. Specifica worst-case running time is required, we provide an O ( logn)-time Voronoi-diagram-based algorithm, where n is the number of vectors whose optimum bounding cone is sought. Otherwise, i f one is willing to accept an, in average, efficient algorithm, we show that the main ingredient of the algorithm of Shirman and Abi-Ezzi [Comput. Graphics Forum 12 (1993) 261–272 implemented to run in optimal (n) expected time. Furthermore, if the vectors (as points on the sphere of directions) are to occupy no more than a hemisphere, we show how to simplify this ingredient (by reducing the dimension of the p without affecting the asymptotic expected running time. Both versions of this algorithm are based on computing (as an problem) the minimum spanning circle (respectively, ball) of a two-dimensional (respectively, three-dimensional) set o 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a3e6d006a56913285d1eb6f0a8e1ce55",
"text": "This paper updates and builds on ‘Modelling with Stakeholders’ Voinov and Bousquet, 2010 which demonstrated the importance of, and demand for, stakeholder participation in resource and environmental modelling. This position paper returns to the concepts of that publication and reviews the progress made since 2010. A new development is the wide introduction and acceptance of social media and web applications, which dramatically changes the context and scale of stakeholder interactions and participation. Technology advances make it easier to incorporate information in interactive formats via visualization and games to augment participatory experiences. Citizens as stakeholders are increasingly demanding to be engaged in planning decisions that affect them and their communities, at scales from local to global. How people interact with and access models and data is rapidly evolving. In turn, this requires changes in how models are built, packaged, and disseminated: citizens are less in awe of experts and external authorities, and they are increasingly aware of their own capabilities to provide inputs to planning processes, including models. The continued acceleration of environmental degradation and natural resource depletion accompanies these societal changes, even as there is a growing acceptance of the need to transition to alternative, possibly very different, life styles. Substantive transitions cannot occur without significant changes in human behaviour and perceptions. The important and diverse roles that models can play in guiding human behaviour, and in disseminating and increasing societal knowledge, are a feature of stakeholder processes today. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "70d7c838e7b5c4318e8764edb5a70555",
"text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.",
"title": ""
},
{
"docid": "18bc3abbd6a4f51fdcfbafcc280f0805",
"text": "Complex disease genetics has been revolutionised in recent years by the advent of genome-wide association (GWA) studies. The chronic inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis have seen notable successes culminating in the discovery of 99 published susceptibility loci/genes (71 Crohn's disease; 47 ulcerative colitis) to date. Approximately one-third of loci described confer susceptibility to both Crohn's disease and ulcerative colitis. Amongst these are multiple genes involved in IL23/Th17 signalling (IL23R, IL12B, JAK2, TYK2 and STAT3), IL10, IL1R2, REL, CARD9, NKX2.3, ICOSLG, PRDM1, SMAD3 and ORMDL3. The evolving genetic architecture of IBD has furthered our understanding of disease pathogenesis. For Crohn's disease, defective processing of intracellular bacteria has become a central theme, following gene discoveries in autophagy and innate immunity (associations with NOD2, IRGM, ATG16L1 are specific to Crohn's disease). Genetic evidence has also demonstrated the importance of barrier function to the development of ulcerative colitis (HNF4A, LAMB1, CDH1 and GNA12). However, when the data are analysed in more detail, deeper themes emerge including the shared susceptibility seen with other diseases. Many immune-mediated diseases overlap in this respect, paralleling the reported epidemiological evidence. However, in several cases the reported shared susceptibility appears at odds with the clinical picture. Examples include both type 1 and type 2 diabetes mellitus. In this review we will detail the presently available data on the genetic overlap between IBD and other diseases. The discussion will be informed by the epidemiological data in the published literature and the implications for pathogenesis and therapy will be outlined. This arena will move forwards very quickly in the next few years. Ultimately, we anticipate that these genetic insights will transform the landscape of common complex diseases such as IBD.",
"title": ""
},
{
"docid": "0bb5bbdf7043eed23cafdd54df68c709",
"text": "We present two studies of online ephemerality and anonymity based on the popular discussion board /b/ at 4chan.org: a website with over 7 million users that plays an influential role in Internet culture. Although researchers and practitioners often assume that user identity and data permanence are central tools in the design of online communities, we explore how /b/ succeeds despite being almost entirely anonymous and extremely ephemeral. We begin by describing /b/ and performing a content analysis that suggests the community is dominated by playful exchanges of images and links. Our first study uses a large dataset of more than five million posts to quantify ephemerality in /b/. We find that most threads spend just five seconds on the first page and less than five minutes on the site before expiring. Our second study is an analysis of identity signals on 4chan, finding that over 90% of posts are made by fully anonymous users, with other identity signals adopted and discarded at will. We describe alternative mechanisms that /b/ participants use to establish status and frame their interactions.",
"title": ""
},
{
"docid": "874b14b3c3e15b43de3310327affebaf",
"text": "We present the Accelerated Quadratic Proxy (AQP) - a simple first-order algorithm for the optimization of geometric energies defined over triangular and tetrahedral meshes.\n The main stumbling block of current optimization techniques used to minimize geometric energies over meshes is slow convergence due to ill-conditioning of the energies at their minima. We observe that this ill-conditioning is in large part due to a Laplacian-like term existing in these energies. Consequently, we suggest to locally use a quadratic polynomial proxy, whose Hessian is taken to be the Laplacian, in order to achieve a preconditioning effect. This already improves stability and convergence, but more importantly allows incorporating acceleration in an almost universal way, that is independent of mesh size and of the specific energy considered.\n Experiments with AQP show it is rather insensitive to mesh resolution and requires a nearly constant number of iterations to converge; this is in strong contrast to other popular optimization techniques used today such as Accelerated Gradient Descent and Quasi-Newton methods, e.g., L-BFGS. We have tested AQP for mesh deformation in 2D and 3D as well as for surface parameterization, and found it to provide a considerable speedup over common baseline techniques.",
"title": ""
},
{
"docid": "8e465d1434932f21db514c49650863bb",
"text": "Context aware recommender systems (CARS) adapt the recommendations to the specific situation in which the items will be consumed. In this paper we present a novel context-aware recommendation algorithm that extends Matrix Factorization. We model the interaction of the contextual factors with item ratings introducing additional model parameters. The performed experiments show that the proposed solution provides comparable results to the best, state of the art, and more complex approaches. The proposed solution has the advantage of smaller computational cost and provides the possibility to represent at different granularities the interaction between context and items. We have exploited the proposed model in two recommendation applications: places of interest and music.",
"title": ""
},
{
"docid": "5cc3ce9628b871d57f086268ae1510e0",
"text": "Unprecedented high volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit planning and operation of the future power system, and to help the customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid type of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both of them being extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This highly-dimensional database includes information about photovoltaic power generation, electric vehicles as well as buildings appliances. Moreover, these on-line energy scheduling strategies could be used to provide realtime feedback to consumers to encourage more efficient use of electricity.",
"title": ""
},
{
"docid": "ff67540fcba29de05415c77744d3a21d",
"text": "Using Youla Parametrization and Linear Matrix Inequalities (LMI) a Multiobjective Robust Control (MRC) design for continuous linear time invariant (LTI) systems with bounded uncertainties is described. The design objectives can be a combination of H∞-, H2-performances, constraints on the control signal, etc.. Based on an initial stabilizing controller all stabilizing controllers for the uncertain system can be described by the Youla parametrization. Given this representation, all objectives can be formulated by independent Lyapunov functions, increasing the degree of freedom for the control design.",
"title": ""
},
{
"docid": "4c2c19b22607c2cc5ba2ebc8ca1c47dc",
"text": "We present our approach for robotic perception in cluttered scenes that led to winning the recent Amazon Robotics Challenge (ARC) 2017. Next to small objects with shiny and transparent surfaces, the biggest challenge of the 2017 competition was the introduction of unseen categories. In contrast to traditional approaches which require large collections of annotated data and many hours of training, the task here was to obtain a robust perception pipeline with only few minutes of data acquisition and training time. To that end, we present two strategies that we explored. One is a deep metric learning approach that works in three separate steps: semantic-agnostic boundary detection, patch classification and pixel-wise voting. The other is a fully-supervised semantic segmentation approach with efficient dataset collection. We conduct an extensive analysis of the two methods on our ARC 2017 dataset. Interestingly, only few examples of each class are sufficient to fine-tune even very deep convolutional neural networks for this specific task.",
"title": ""
},
{
"docid": "db114a47e7e3d6cc7196d1f73c143bf2",
"text": "OBJECT\nSeveral methods are used for stereotactically guided implantation of electrodes into the subthalamic nucleus (STN) for continuous high-frequency stimulation in the treatment of Parkinson's disease (PD). The authors present a stereotactic magnetic resonance (MR) method relying on three-dimensional (3D) T1-weighted images for surgical planning and multiplanar T2-weighted images for direct visualization of the STN, coupled with electrophysiological recording and stimulation guidance.\n\n\nMETHODS\nTwelve patients with advanced PD were enrolled in this study of bilateral STN implantation. Both STNs were visible as 3D ovoid biconvex hypointense structures located in the upper mesencephalon. The coordinates of the centers of the STNs were determined with reference to the patient's anterior commissure-posterior commissure line by using a new landmark, the anterior border of the red nucleus. Electrophysiological monitoring through five parallel tracks was performed simultaneously to define the functional target accurately. Microelectrode recording identified high-frequency, spontaneous, movement-related activity and tremor-related cells within the STNs. Acute STN macrostimulation improved contralateral rigidity and akinesia, suppressed tremor when present, and could induce dyskinesias. The central track, which was directed at the predetermined target by using MR imaging, was selected for implantation of 19 of 24 electrodes. No surgical complications were noted.\n\n\nCONCLUSIONS\nAt evaluation 6 months after surgery, continuous STN stimulation was shown to have improved parkinsonian motor disability by 64% and 78% in the \"off' and \"on\" medication states, respectively. Antiparkinsonian drug treatment was reduced by 70% in 10 patients and withdrawn in two patients. The severity of levodopa-induced dyskinesias was reduced by 83% and motor fluctuations by 88%. Continuous high-frequency stimulation of the STN applied through electrodes implanted with the aid of 3D MR imaging and electrophysiological guidance is a safe and effective therapy for patients suffering from severe, advanced levodopa-responsive PD.",
"title": ""
},
{
"docid": "0d4d401707c77d0e8ac06bba838f00b7",
"text": "Four studies examined how impulse-control beliefs--beliefs regarding one's ability to regulate visceral impulses, such as hunger, drug craving, and sexual arousal-influence the self-control process. The findings provide evidence for a restraint bias: a tendency for people to overestimate their capacity for impulse control. This biased perception of restraint had important consequences for people's self-control strategies. Inflated impulse-control beliefs led people to overexpose themselves to temptation, thereby promoting impulsive behavior. In Study 4, for example, the impulse-control beliefs of recovering smokers predicted their exposure to situations in which they would be tempted to smoke. Recovering smokers with more inflated impulse-control beliefs exposed themselves to more temptation, which led to higher rates of relapse 4 months later. The restraint bias offers unique insight into how erroneous beliefs about self-restraint promote impulsive behavior.",
"title": ""
},
{
"docid": "1d5cd4756e424f3d282545f029c1e9bb",
"text": "Anomaly detection systems deployed for monitoring in oil and gas industries are mostly WSN based systems or SCADA systems which all suffer from noteworthy limitations. WSN based systems are not homogenous or incompatible systems. They lack coordinated communication and transparency among regions and processes. On the other hand, SCADA systems are expensive, inflexible, not scalable, and provide data with long delay. In this paper, a novel IoT based architecture is proposed for Oil and gas industries to make data collection from connected objects as simple, secure, robust, reliable and quick. Moreover, it is suggested that how this architecture can be applied to any of the three categories of operations, upstream, midstream and downstream. This can be achieved by deploying a set of IoT based smart objects (devices) and cloud based technologies in order to reduce complex configurations and device programming. Our proposed IoT architecture supports the functional and business requirements of upstream, midstream and downstream oil and gas value chain of geologists, drilling contractors, operators, and other oil field services. Using our proposed IoT architecture, inefficiencies and problems can be picked and sorted out sooner ultimately saving time and money and increasing business productivity.",
"title": ""
},
{
"docid": "c404e6ecb21196fec9dfeadfcb5d4e4b",
"text": "The goal of leading indicators for safety is to identify the potential for an accident before it occurs. Past efforts have focused on identifying general leading indicators, such as maintenance backlog, that apply widely in an industry or even across industries. Other recommendations produce more system-specific leading indicators, but start from system hazard analysis and thus are limited by the causes considered by the traditional hazard analysis techniques. Most rely on quantitative metrics, often based on probabilistic risk assessments. This paper describes a new and different approach to identifying system-specific leading indicators and provides guidance in designing a risk management structure to generate, monitor and use the results. The approach is based on the STAMP (SystemTheoretic Accident Model and Processes) model of accident causation and tools that have been designed to build on that model. STAMP extends current accident causality to include more complex causes than simply component failures and chains of failure events or deviations from operational expectations. It incorporates basic principles of systems thinking and is based on systems theory rather than traditional reliability theory.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "aad262b19db8dd6c6caf34e7966c433a",
"text": "Cloud computing is now a well-consolidated paradigm for on-demand services provisioning on a pay-as-you-go model. Elasticity, one of the major benefits required for this computing model, is the ability to add and remove resources “on the fly” to handle the load variation. Although many works in literature have surveyed cloud computing and its features, there is a lack of a detailed analysis about elasticity for the cloud. As an attempt to fill this gap, we propose this survey on cloud computing elasticity based on an adaptation of a classic systematic review. We address different aspects of elasticity, such as definitions, metrics and tools for measuring, evaluation of the elasticity, and existing solutions. Finally, we present some open issues and future direcEmanuel Ferreira Coutinho Master and Doctorate in Computer Science (MDCC) Virtual UFC Institute Federal University of Ceara (UFC) Brazil Tel.: +55-85-8875-1977 E-mail: [email protected] Flávio R. C. Sousa Teleinformatics Engineering Department (DETI) Federal University of Ceara (UFC) Brazil E-mail: [email protected] Paulo A. L. Rego Master and Doctorate in Computer Science (MDCC) Federal University of Ceara (UFC) Brazil E-mail: [email protected] Danielo G. Gomes Teleinformatics Engineering Department (DETI) Federal University of Ceara (UFC) Brazil E-mail: [email protected] José N. de Souza Master and Doctorate in Computer Science (MDCC) Federal University of Ceara (UFC) Brazil E-mail: [email protected] 2 Emanuel Ferreira Coutinho et al. tions. To the best of our knowledge, this is the first study on cloud computing elasticity using a systematic review approach.",
"title": ""
},
{
"docid": "c675a2f1fed4ccb5708be895190b02cd",
"text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.",
"title": ""
},
{
"docid": "9c1beecda61e50dd278e73c55ca703c8",
"text": "Power MOSFET designs have been moving to higher performance particularly in the medium voltage area. (60V to 300V) New designs require lower Specific On-resistance while not sacrificing Unclamped Inductive Switching (UIS) capability or increasing turn-off losses. Two charge balance technologies currently address these needs, the PN junction and the Shielded Gate Charge Balance device topologies. This paper will study the impact of drift region as well as other design parameters that influence the shielded gate class of charge balance devices. The optimum design for maximizing UIS capability and minimizing the impact on other design parameters such as RDSON and switching performance are addressed. It will be shown through TCAD simulation one can design devices to have a stable avalanche point that is not influenced by small variations within a die or die-to-die that result from normal processing. Finally, measured and simulated data will be presented showing a fabricated device with near theoretical UIS capability.",
"title": ""
},
{
"docid": "33906623c1ac445e18a30805d2a122cf",
"text": "Diagnostic problems abound for individuals, organizations, and society. The stakes are high, often life and death. Such problems are prominent in the fields of health care, public safety, business, environment, justice, education, manufacturing, information processing, the military, and government. Particular diagnostic questions are raised repetitively, each time calling for a positive or negative decision about the presence of a given condition or the occurrence (often in the future) of a given event. Consider the following illustrations: Is a cancer present? Will this individual commit violence? Are there explosives in this luggage? Is this aircraft fit to fly? Will the stock market advance today? Is this assembly-line item flawed? Will an impending storm strike? Is there oil in the ground here? Is there an unsafe radiation level in my house? Is this person lying? Is this person using drugs? Will this applicant succeed? Will this book have the information I need? Is that plane intending to attack this ship? Is this applicant legally disabled? Does this tax return justify an audit? Each time such a question is raised, the available evidence is assessed by a person or a device or a combination of the two, and a choice is then made between the two alternatives, yes or no. The evidence may be a x-ray, a score on a psychiatric test, a chemical analysis, and so on. In considering just yes–no alternatives, such diagnoses do not exhaust the types of diagnostic questions that exist. Other questions, for example, a differential diagnosis in medicine, may require considering a half dozen or more possible alternatives. Decisions of the yes–no type, however, are prevalent and important, as the foregoing examples suggest, and they are the focus of our analysis. We suggest that diagnoses of this type rest on a general process with common characteristics across fields, and that the process warrants scientific analysis as a discipline in its own right (Swets, 1988, 1992). The main purpose of this article is to describe two ways, one obvious and one less obvious, in which diagnostic performance can be improved. The more obvious way to improve diagnosis is to improve its accuracy, that is, its ability to distinguish between the two diagnostic alternatives and to select the correct one. The less obvious way to improve diagnosis is to increase the utility of the diagnostic decisions that are made. That is, apart from improving accuracy, there is a need to produce decisions that are in tune both with the situational probabilities of the alternative diagnostic conditions and with the benefits and costs, respectively, of correct and incorrect decisions. Methods exist to achieve both goals. These methods depend on a measurement technique that separately and independently quantifies the two aspects of diagnostic performance, namely, its accuracy and the balance it provides among the various possible types of decision outcomes. We propose that together the method for measuring diagnostic performance and the methods for improving it constitute the fundamentals of a science of diagnosis. We develop the idea that this incipient discipline has been demonstrated to improve diagnosis in several fields, but is nonetheless virtually unknown and unused in others. We consider some possible reasons for the disparity between the general usefulness of the methods and their lack of general use, and we advance some ideas for reducing this disparity. 
To anticipate, we develop two successful examples of these methods in some detail: the prognosis of violent behavior and the diagnosis of breast and prostate cancer. We treat briefly other successful examples, such as weather forecasting and admission to a selective school. We also develop in detail two examples of fields that would markedly benefit from application of the methods, namely the detection of cracks in airplane wings and the detection of the virus of AIDS. Briefly treated are diagnoses of dangerous conditions for in-flight aircraft and of behavioral impairments that qualify as disabilities in individuals.",
"title": ""
},
{
"docid": "a76d5685b383e45778417d5eccdd8b6c",
"text": "The advent of both Cloud computing and Internet of Things (IoT) is changing the way of conceiving information and communication systems. Generally, we talk about IoT Cloud to indicate a new type of distributed system consisting of a set of smart devices interconnected with a remote Cloud infrastructure, platform, or software through the Internet and able to provide IoT as a Service (IoTaaS). In this paper, we discuss the near future evolution of IoT Clouds towards federated ecosystems, where IoT providers cooperate to offer more flexible services. Moreover, we present a general three-layer IoT Cloud Federation architecture, highlighting new business opportunities and challenges.",
"title": ""
}
] |
scidocsrr
|
2db0164f18f4dbeba9a67c9617d3c7bb
|
Government innovation through social media
|
[
{
"docid": "e5c8f2b732504aa32d37d4f5f1c3f295",
"text": "The revolution in information and communication technologies (ICT) has been changing not only the daily lives of people but also the interactions between governments and citizens. The digital government or electronic government (e-government) has started as a new form of public organization that supports and redefines the existing and new information, communication and transaction-related interactions with stakeholders (e.g., citizens and businesses) through ICT, especially through the Internet and Web technologies, with the purpose of improving government performance and processes [1].",
"title": ""
}
] |
[
{
"docid": "11b687aab787bc65f31d3f3037c2d1ed",
"text": "A review of the literature on the influence of beavers on the environment has been presented with regard to following aspects: (1) specific features of the ecology of beavers crucial for understanding their effects on the environment: (2) changes in the physical characteristics of habitats due to the activity of beavers (beavers as engineers); (3) the role of the beaver as a phytophage; (4) long-term changes of vegetation in beavers’ habitats and the possible consequences of these changes for beavers.",
"title": ""
},
{
"docid": "fb81f9419861a20b2e6e45ba04bb0ce1",
"text": "It has been said for decades (if not centuries) that more and more information is becoming available and that tools are needed to handle it. Only recently, however, does it seem that a sufficient quantity of this information is electronically available to produce a widespread need for automatic summarization. Consequently, this research area has enjoyed a resurgence of interest in the past few years, as illustrated by a 1997 ACL Workshop, a 1998 AAAI Spring Symposium and in the same year SUMMAC: a TREC-like TIPSTER-funded summarization evaluation conference. Not unexpectedly, there is now a book to add to this list: Advances in Automatic Summarization, a collection of papers edited by Inderjeet Mani and Mark T. Maybury and published by The MIT Press. Half of it is a historical record: thirteen previously published papers, including classics such as Luhn’s 1958 word-counting sentence-extraction paper, Edmundson’s 1969 use of cue words and phrases, and Kupiec, Pedersen, and Chen’s 1995 trained summarizer. The other half of the book holds new papers, which attempt to cover current issues and point to future trends. It starts with a paper by Karen Spärck Jones, which acts as an overall introduction. In it, the summarization process and the uses of summaries are broken down into their constituent parts and each of these is discussed (it reminded me of a much earlier Spärck Jones paper on categorization [1970]). Despite its comprehensiveness and authority, I must confess to finding this opener heavy going at times. The rest of the papers are grouped into six sections, each of which is prefaced with two or three well-written pages from the editors. These introductions contain valuable commentary on the coming papers—even pointing out a possible flaw in the evaluation part of one. The opening section holds three papers on so-called classical approaches. Here one finds the oft-cited papers of Luhn, Edmundson, and Pollock and Zamora. As a package, these papers provide a novice with a good idea of how basic summarization works. My only quibble was in their reproduction. In Luhn’s paper, an article from Scientific American is summarized and it would have been beneficial to have this included in the book as well. Some of the figures in another paper contained very small fonts and were hard to read; fixing this for a future print run is probably worth thinking about. The next section holds papers on corpus-based approaches to summarization, starting with Kupiec et al.’s paper about a summarizer trained on an existing corpus of manually abstracted documents. Two new papers building upon the Kupiec et al. work follow this. Exploiting the discourse structure of a document is the topic of the next section. Of the five papers here, I thought Daniel Marcu’s was the best, nicely describing summarization work so far and then clearly explaining his system, which is based on Rhetorical Structure Theory. The following section on knowledge-rich approaches to summarization covers such things as Wendy Lehnert’s work on breaking",
"title": ""
},
{
"docid": "e3374d5fd1abf4f747a05341d4d04ec6",
"text": "The written medium through which we commonly learn about relevant news are news articles. Since there is an abundance of news articles that are written daily, the readers have a common problem of discovering the content of interest and still not be overwhelmed with the amount of it. In this paper we present a system called Event Registry which is able to group articles about an event across languages and extract from the articles core event information in a structured form. In this way, the amount of content that the reader has to check is significantly reduced while additionally providing the reader with a global coverage of each event. Since all event information is structured this also provides extensive and fine-grained options for information searching and filtering that are not available with current news aggregators.",
"title": ""
},
{
"docid": "6c4944ebd75404a0f3b2474e346677f1",
"text": "Wireless industry nowadays is facing two major challenges: 1) how to support the vertical industry applications so that to expand the wireless industry market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed technology framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industry and services and horizontal slicing improves system capacity and user experience. The technology development on vertical slicing has already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend to continue with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concept of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate the discussion on the long-range technology roadmap and spur development on the solutions for E2E network slicing in 5G and beyond.",
"title": ""
},
{
"docid": "73c0360dfcf421d71a258b5b6959572e",
"text": "Text representation plays a crucial role in classical text mining, where the primary focus was on static text. Nevertheless, well-studied static text representations including TFIDF are not optimized for non-stationary streams of information such as news, discussion board messages, and blogs. We therefore introduce a new temporal representation for text streams based on bursty features. Our bursty text representation differs significantly from traditional schemes in that it 1) dynamically represents documents over time, 2) amplifies a feature in proportional to its burstiness at any point in time, and 3) is topic independent. Our bursty text representation model was evaluated against a classical bagof-words text representation on the task of clustering TDT3 topical text streams. It was shown to consistently yield more cohesive clusters in terms of cluster purity and cluster/class entropies. This new temporal bursty text representation can be extended to most text mining tasks involving a temporal dimension, such as modeling of online blog pages.",
"title": ""
},
{
"docid": "fc8ab3792af939fd982fbc3a95ecb364",
"text": "A critical step in treating or eradicating weed infestations amongst vegetable crops is the ability to accurately and reliably discriminate weeds from crops. In recent times, high spatial resolution hyperspectral imaging data from ground based platforms have shown particular promise in this application. Using spectral vegetation signatures to discriminate between crop and weed species has been demonstrated on several occasions in the literature over the past 15 years. A number of authors demonstrated successful per-pixel classification with accuracies of over 80%. However, the vast majority of the related literature uses supervised methods, where training datasets have been manually compiled. In practice, static training data can be particularly susceptible to temporal variability due to physiological or environmental change. A self-supervised training method that leverages prior knowledge about seeding patterns in vegetable fields has recently been introduced in the context of RGB imaging, allowing the classifier to continually update weed appearance models as conditions change. This paper combines and extends these methods to provide a self-supervised framework for hyperspectral crop/weed discrimination with prior knowledge of seeding patterns using an autonomous mobile ground vehicle. Experimental results in corn crop rows demonstrate the system's performance and limitations.",
"title": ""
},
{
"docid": "be9fc2798c145abe70e652b7967c3760",
"text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.",
"title": ""
},
{
"docid": "739b17d28d4e2196ca3cf734eee3ab93",
"text": "There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, an important goal of explanation is to acquire or maintain the users’ trust. In this paper, I investigate the relation between explanation and trust in the context of computing science. This analysis draws on literature study and concept analysis, using elements from system theory as well as actor-network theory. I apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, I discuss consequences of the analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.",
"title": ""
},
{
"docid": "6c2d0a9d2e542a2778a7d798ce33dded",
"text": "Grounded theory has frequently been referred to, but infrequently applied in business research. This article addresses such a deficiency by advancing two focal aims. Firstly, it seeks to de-mystify the methodology known as grounded theory by applying this established research practice within the comparatively new context of business research. Secondly, in so doing, it integrates naturalistic examples drawn from the author’s business research, hence explicating the efficacy of grounded theory methodology in gaining deeper understanding of business bounded phenomena. It is from such a socially focused methodology that key questions of what is happening and why leads to the generation of substantive theories and underpinning",
"title": ""
},
{
"docid": "36b310b4fcd58c54879ebcddb537eafe",
"text": "Semantic similarity of text plays an important role in many NLP tasks. It requires using both local information like lexical semantics and structural information like syntactic structures. Recent progress in word representation provides good resources for lexical semantics, and advances in natural language analysis tools make it possible to efficiently generate syntactic and semantic annotations. However, how to combine them to capture the semantics of text is still an open question. Here, we propose a new alignment-based approach to learn semantic similarity. It uses a hybrid representation, attributed relational graphs, to encode lexical, syntactic and semantic information. Alignment of two such graphs combines local and structural information to support similarity estimation. To improve alignment, we introduced structural constraints inspired by a cognitive theory of similarity and analogy. Usually only similarity labels are given in training data and the true alignments are unknown, so we address the learning problem using two approaches: alignment as feature extraction and alignment as latent variable. Our approach is evaluated on the paraphrase identification task and achieved results competitive with the state-of-theart.",
"title": ""
},
{
"docid": "9058505c04c1dc7c33603fd8347312a0",
"text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.",
"title": ""
},
{
"docid": "73a02535ca36f6233319536f70975366",
"text": "Structured decorative patterns are common ornamentations in a variety of media like books, web pages, greeting cards and interior design. Creating such art from scratch using conventional software is time consuming for experts and daunting for novices. We introduce DecoBrush, a data-driven drawing system that generalizes the conventional digital \"painting\" concept beyond the scope of natural media to allow synthesis of structured decorative patterns following user-sketched paths. The user simply selects an example library and draws the overall shape of a pattern. DecoBrush then synthesizes a shape in the style of the exemplars but roughly matching the overall shape. If the designer wishes to alter the result, DecoBrush also supports user-guided refinement via simple drawing and erasing tools. For a variety of example styles, we demonstrate high-quality user-constrained synthesized patterns that visually resemble the exemplars while exhibiting plausible structural variations.",
"title": ""
},
{
"docid": "efced3407e46faf9fa43ce299add28f4",
"text": "This is a pilot study of the use of “Flash cookies” by popular websites. We find that more than 50% of the sites in our sample are using Flash cookies to store information about the user. Some are using it to “respawn” or re-instantiate HTTP cookies deleted by the user. Flash cookies often share the same values as HTTP cookies, and are even used on government websites to assign unique values to users. Privacy policies rarely disclose the presence of Flash cookies, and user controls for effectuating privacy preferences are",
"title": ""
},
{
"docid": "f4be6b2bf1cd462ec758fe37b098eef1",
"text": "Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in nonstationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free.",
"title": ""
},
{
"docid": "aaa07bc9a07ead770f7e17e9aa82e151",
"text": "Person re-identification (Re-ID) aims to match the image frames which contain the same person in the surveillance videos. Most of the Re-ID algorithms conduct supervised training in some small labeled datasets, so directly deploying these trained models to the real-world large camera networks may lead to a poor performance due to underfitting. The significant difference between the source training dataset and the target testing dataset makes it challenging to incrementally optimize the model. To address this challenge, we propose a novel solution by transforming the unlabeled images in the target domain to fit the original classifier by using our proposed similarity preserved generative adversarial networks model, SimPGAN. Specifically, SimPGAN adopts the generative adversarial networks with the cycle consistency constraint to transform the unlabeled images in the target domain to the style of the source domain. Meanwhile, SimPGAN uses the similarity consistency loss, which is measured by a siamese deep convolutional neural network, to preserve the similarity of the transformed images of the same person. Comprehensive experiments based on multiple real surveillance datasets are conducted, and the results show that our algorithm is better than the state-of-the-art cross-dataset unsupervised person Re-ID algorithms.",
"title": ""
},
{
"docid": "aef55dbadd2ae6509907b2632c88227a",
"text": "In this paper we consider a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations. The scheme is perfectly secure and very easy to implement. We extend it into a visual variant of the k out of n secret sharing problem, in which a dealer provides a transparency to each one of the n users; any k of them can see the image by stacking their transparencies, but any k 1 of them gain no information about it. A preliminary version of this paper appeared in Eurocrypt 94. y Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel. E-mail: [email protected]. Research supported by an Alon Fellowship and a grant from the Israel Science Foundation administered by the Israeli Academy of Sciences. z Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel. E-mail: [email protected].",
"title": ""
},
{
"docid": "1255bb7d89a30314dc41dbcf7ac9a174",
"text": "Gainesville, Florida, 10 March 2 012. Today, the Mobile Location- Based Services Summit hosted a panel entitled \"What Was Wrong with First-Generation Location-Based Services?\" The panel chair, Sumi Helal of the University of Florida, invited two world-class experts in LBS history and technology to discuss the topic: Paolo Bellavista of the University of Bologna and Axel Kupper of the University of Munich. The panel discussed the popularity of today's LBSs and analyzed their distinguishing aspects in comparison with first-generation LBSs. The panel was anything but controversial, with all panelists in total agreement on what initially went wrong and why today's LBSs work. They analyzed how the failure unfolded to set the stage for a major paradigm shift in LBS business and technology and noted the milestones that shaped today's LBSs.",
"title": ""
},
{
"docid": "abc4dccce4f4dbb0e8ccd17cbac04adc",
"text": "In this introduction and review—like in the book which follows—we explore the hypothesis that adaptive growth is a means of producing brain-like machines. The emulation of neural development can incorporate desirable characteristics of natural neural systems into engineered designs. The introduction begins with a review of neural development and neural models. Next, artificial development— the use of a developmentally-inspired stage in engineering design—is introduced. Several strategies for performing this “meta-design” for artificial neural systems are reviewed. This work is divided into three main categories: bio-inspired representations; developmental systems; and epigenetic simulations. Several specific network biases and their benefits to neural network design are identified in these contexts. In particular, several recent studies show a strong synergy, sometimes interchangeability, between developmental and epigenetic processes—a topic that has remained largely under-explored in the literature. T. Kowaliw (B) Institut des Systèmes Complexes Paris Île-de-France, CNRS, Paris, France e-mail: [email protected] N. Bredeche Sorbonne Universités, UPMC University Paris 06, UMR 7222 ISIR,F-75005 Paris, France e-mail: [email protected] N. Bredeche CNRS, UMR 7222 ISIR,F-75005 Paris, France S. Chevallier Versailles Systems Engineering Laboratory (LISV), University of Versailles, Velizy, France e-mail: [email protected] R. Doursat School of Biomedical Engineering, Drexel University, Philadelphia, USA e-mail: [email protected] T. Kowaliw et al. (eds.), Growing Adaptive Machines, 1 Studies in Computational Intelligence 557, DOI: 10.1007/978-3-642-55337-0_1, © Springer-Verlag Berlin Heidelberg 2014",
"title": ""
},
{
"docid": "39fa66b86ca91c54a2d2020f04ecc7ba",
"text": "We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.",
"title": ""
},
{
"docid": "4faa5fd523361d472fc0bea8508c58f8",
"text": "This paper reviews the current state of laser scanning from airborne and terrestrial platforms for geometric reconstruction of object shape and size. The current performance figures of sensor systems are presented in an overview. Next, their calibration and the orientation of the acquired point clouds is discussed. For airborne deployment this is usually one step, whereas in the terrestrial case laboratory calibration and registration of point clouds are (still) two distinct, independent steps. As laser scanning is an active measurement technology, the interaction of the emitted energy with the object surface has influences on the range measurement. This has to be considered in order to explain geometric phenomena in the data. While the problems, e.g. multiple scattering, are understood well, there is currently a lack of remedies. Then, in analogy to the processing chain, segmentation approaches for laser scanning data are reviewed. Segmentation is a task relevant for almost all applications. Likewise, DTM (digital terrain model) reconstruction is relevant for many applications of airborne laser scanning, and is therefore discussed, too. This paper reviews the main processing steps necessary for many applications of laser scanning.",
"title": ""
}
] |
scidocsrr
|
92f27171e77957b767863663922db45e
|
Efficient and Robust Question Answering from Minimal Context over Documents
|
[
{
"docid": "22accfa74592e8424bdfe74224365425",
"text": "In the SQuaD reading comprehension task systems are given a paragraph from Wikipedia and have to answer a question about it. The answer is guaranteed to be contained within the paragraph. There are 107,785 such paragraph-question-answer tuples in the dataset. Human performance on this task achieves 91.2% accuracy (F1), and the current state-of-the-art system obtains a respectably close 84.7%. Not so fast though! If we adversarially add a single sentence to those paragraphs, in such a way that the added sentences do not contradict the correct answer, nor do they confuse humans, the accuracy of the published models studied plummets from an average of 75% to just 36%.",
"title": ""
}
] |
[
{
"docid": "6b97ad3fc20e56f28ae5bf7c6fd0eb57",
"text": "We propose a new model of steganography based on a list of pseudo-randomly sorted sequences of characters. Given a list L of m columns containing n distinct strings each, with low or no semantic relationship between columns taken two by two, and a secret message s ∈ {0, 1}∗, our model embeds s in L block by block, by generating, for each column of L, a permutation number and by reordering strings contained in it according to that number. Where, letting l be average bit length of a string, the embedding capacity is given by [(m − 1) ∗ log2(n! − 1)/n ∗ l]. We’ve shown that optimal efficiency of the method can be obtained with the condition that (n >> l). The results which has been obtained by experiments, show that our model performs a better hiding process than some of the important existing methods, in terms of hiding capacity.",
"title": ""
},
{
"docid": "81cae27233c3e6a56f382dfb28c996c2",
"text": "Robust face recognition (FR) is an active topic in computer vision and biometrics, while face occlusion is one of the most challenging problems for robust FR. Recently, the representation (or coding) based FR schemes with sparse coding coefficients and coding residual have demonstrated good robustness to face occlusion; however, the high complexity of l1-minimization makes them less useful in practical applications. In this paper we propose a novel coding residual map learning scheme for fast and robust FR based on the fact that occluded pixels usually have higher coding residuals when representing an occluded face image over the non-occluded training samples. A dictionary is learned to code the training samples, and the distribution of coding residuals is computed. Consequently, a residual map is learned to detect the occlusions by adaptive thresholding. Finally the face image is identified by masking the detected occlusion pixels from face representation. Experiments on benchmark databases show that the proposed scheme has much lower time complexity but comparable FR accuracy with other popular approaches.",
"title": ""
},
{
"docid": "d5d96493b34cfbdf135776e930ec5979",
"text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.",
"title": ""
},
{
"docid": "a45b4d0237fdcfedf973ec639b1a1a36",
"text": "We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non- propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain- language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.",
"title": ""
},
{
"docid": "da69ac86355c5c514f7e86a48320dcb3",
"text": "Current approaches to semantic parsing, the task of converting text to a formal meaning representation, rely on annotated training data mapping sentences to logical forms. Providing this supervision is a major bottleneck in scaling semantic parsers. This paper presents a new learning paradigm aimed at alleviating the supervision burden. We develop two novel learning algorithms capable of predicting complex structures which only rely on a binary feedback signal based on the context of an external world. In addition we reformulate the semantic parsing problem to reduce the dependency of the model on syntactic patterns, thus allowing our parser to scale better using less supervision. Our results surprisingly show that without using any annotated meaning representations learning with a weak feedback signal is capable of producing a parser that is competitive with fully supervised parsers.",
"title": ""
},
{
"docid": "33ef514ef6ea291ad65ed6c567dbff37",
"text": "In this paper, we present an improved feedforward sequential memory networks (FSMN) architecture, namely Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure. As a result, DFSMN significantly benefits from these skip connections and deep structure. We have compared the performance of DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. Experimental results shown that DFSMN can consistently outperform BLSTM with dramatic gain, especially trained with LFR using CD-Phone as modeling units. In the 20000 hours Fisher (FSH) task, the proposed DFSMN can achieve a word error rate of 9.4% by purely using the cross-entropy criterion and decoding with a 3-gram language model, which achieves a 1.5% absolute improvement compared to the BLSTM. In a 20000 hours Mandarin recognition task, the LFR trained DFSMN can achieve more than 20% relative improvement compared to the LFR trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in DFSMN to control the latency for real-time applications.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "e5b2aa76e161661ea613912ba40695bd",
"text": "Three meanings of “information” are distinguished: “Information-as-process”; “information-as-knowledge”; and “information-as-thing,” the attributive use of “information” to denote things regarded as informative. The nature and characteristics of “information-asthing” are discussed, using an indirect approach (“What things are informative?“). Varieties of “informationas-thing” include data, text, documents, objects, and events. On this view “information” includes but extends beyond communication. Whatever information storage and retrieval systems store and retrieve is necessarily “information-as-thing.” These three meanings of “information,” along with “information processing,” offer a basis for classifying disparate information-related activities (e.g., rhetoric, bibliographic retrieval, statistical analysis) and, thereby, suggest a topography for “information science.”",
"title": ""
},
{
"docid": "44de39859665488f8df950007d7a01c6",
"text": "Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the “anchor” method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word-cooccurrence to include metadataspecific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., “buffy” and “spike” for positive sentiment vs. “twilight” and “cullen” for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding “anchor words”—precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words—words which unambiguously identify a particular topic. For instance, “wicket” might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. 
Q̄wicket,· will have high probability for “bowl”, “century”, “pitch”, and “bat”; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,",
"title": ""
},
{
"docid": "8f9c8188fb22c4aee1f7b066d24e3793",
"text": "The objective of unsupervised domain adaptation is to leverage features from a labeled source domain and learn a classifier for an unlabeled target domain, with a similar but different data distribution. Most deep learning approaches to domain adaptation consist of two steps: (i) learn features that preserve a low risk on labeled samples (source domain) and (ii) make the features from both domains to be as indistinguishable as possible, so that a classifier trained on the source can also be applied on the target domain. In general, the classifiers in step (i) consist of fully-connected layers applied directly on the indistinguishable features learned in (ii). In this paper, we propose a different way to do the classification, using similarity learning. The proposed method learns a pairwise similarity function in which classification can be performed by computing similarity between prototype representations of each category. The domain-invariant features and the categorical prototype representations are learned jointly and in an end-to-end fashion. At inference time, images from the target domain are compared to the prototypes and the label associated with the one that best matches the image is outputed. The approach is simple, scalable and effective. We show that our model achieves state-of-the-art performance in different unsupervised domain adaptation scenarios.",
"title": ""
},
{
"docid": "b6bd380108803bec62dae716d9e0a83e",
"text": "With the advent of statistical modeling in sports, predicting the outcome of a game has been established as a fundamental problem. Cricket is one of the most popular team games in the world. With this article, we embark on predicting the outcome of a One Day International (ODI) cricket match using a supervised learning approach from a team composition perspective. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual player’s batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Player independent factors have also been considered in order to predict the outcome of a match. We show that the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers.",
"title": ""
},
{
"docid": "c9284c30e686c1fe1b905b776b520e0e",
"text": "Two decades since the idea of using software diversity for security was put forward, ASLR is the only technique to see widespread deployment. This is puzzling since academic security researchers have published scores of papers claiming to advance the state of the art in the area of code randomization. Unfortunately, these improved diversity techniques are generally less deployable than integrity-based techniques, such as control-flow integrity, due to their limited compatibility with existing optimization, development, and distribution practices. This paper contributes yet another diversity technique called pagerando. Rather than trading off practicality for security, we first and foremost aim for deployability and interoperability. Most code randomization techniques interfere with memory sharing and deduplication optimization across processes and virtual machines, ours does not. We randomize at the granularity of individual code pages but never rewrite page contents. This also avoids incompatibilities with code integrity mechanisms that only allow signed code to be mapped into memory and prevent any subsequent changes. On Android, pagerando fully adheres to the default SELinux policies. All practical mitigations must interoperate with unprotected legacy code, our implementation transparently interoperates with unmodified applications and libraries. To support our claims of practicality, we demonstrate that our technique can be integrated into and protect all shared libraries shipped with stock Android 6.0. We also consider hardening of non-shared libraries and executables and other concerns that must be addressed to put software diversity defenses on par with integrity-based mitigations such as CFI.",
"title": ""
},
{
"docid": "9caaf7c3c2e01e8625fc566db4913df1",
"text": "It is established that driver distraction is the result of sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his speech in situations potentially requiring more attention from the driver, but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment to interrupt when the driver needs to be fully attentive to the driving task and subsequently resume its delivery. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform equally well to control conditions (no concurrent task) when the system intelligently interrupts and resumes.",
"title": ""
},
{
"docid": "458633abcbb030b9e58e432d5b539950",
"text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.",
"title": ""
},
{
"docid": "74c7895313a2f98a5dd4e5c9d5c664bf",
"text": "The research was conducted to identify the presence of protein by indicating amide groups and measuring its level in food through specific groups of protein using FTIR (Fourier Transformed Infrared) method. The scanning process was conducted on wavenumber 400—4000 cm -1 . The determination of functional group was being done by comparing wavenumber of amide functional groups of the protein samples to existing standard. Protein level was measured by comparing absorbance of protein specific functional groups to absorbance of fatty acid functional groups. Result showed the FTIR spectrums of all samples were on 557-3381 cm -1 wavenumber range. The amides detected were Amide III, IV, and VI with absorbance between trace until 0.032%. The presence of protein can be detected in samples animal and vegetable cheese, butter, and milk through functional groups of amide III, IV, and VI were on 1240-1265 cm -1 , 713-721 cm -1 , and 551-586 cm -1 wavenumber respectively . Urine was detected through functional groups of amide III and IV were on 1639 cm -1 and 719 cm -1 wavenumber. The protein level of animal cheese, vegetable cheese, butter, and milk were 1.01%, 1.0%, 0.86%, and 1.55% respectively.",
"title": ""
},
{
"docid": "5ab4bb5923bf589436651783a6627a0d",
"text": "A capacity fade prediction model has been developed for Li-ion cells based on a semi-empirical approach. Correlations for variation of capacity fade parameters with cycling were obtained with two different approaches. The first approach takes into account only the active material loss, while the second approach includes rate capability losses too. Both methods use correlations for variation of the film resistance with cycling. The state of charge (SOC) of the limiting electrode accounts for the active material loss. The diffusion coefficient of the limiting electrode was the parameter to account for the rate capability losses during cycling. © 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "d010a2f8240ff9f6704cde917cb85cf0",
"text": "OBJECTIVE\nAlthough psychological modulation of immune function is now a well-established phenomenon, much of the relevant literature has been published within the last decade. This article speculates on future directions for psychoneuroimmunology research, after reviewing the history of the field.\n\n\nMETHODS\nThis review focuses on human psychoneuroimmunology studies published since 1939, particularly those that have appeared in Psychosomatic Medicine. Studies were clustered according to key themes, including stressor duration and characteristics (laboratory stressors, time-limited naturalistic stressors, or chronic stress), as well as the influences of psychopathology, personality, and interpersonal relationships; the responsiveness of the immune system to behavioral interventions is also addressed. Additionally, we describe trends in populations studied and the changing nature of immunological assessments. The final section focuses on health outcomes and future directions for the field.\n\n\nRESULTS\nThere are now sufficient data to conclude that immune modulation by psychosocial stressors or interventions can lead to actual health changes, with the strongest direct evidence to date in infectious disease and wound healing. Furthermore, recent medical literature has highlighted a spectrum of diseases whose onset and course may be influenced by proinflammatory cytokines, from cardiovascular disease to frailty and functional decline; proinflammatory cytokine production can be directly stimulated by negative emotions and stressful experiences and indirectly stimulated by chronic or recurring infections. Accordingly, distress-related immune dysregulation may be one core mechanism behind a diverse set of health risks associated with negative emotions.\n\n\nCONCLUSIONS\nWe suggest that psychoneuroimmunology may have broad implications for the basic biological sciences and medicine.",
"title": ""
},
{
"docid": "833786dcf2288f21343d60108819fe49",
"text": "This paper describes an audio event detection system which automatically classifies an audio event as ambient noise, scream or gunshot. The classification system uses two parallel GMM classifiers for discriminating screams from noise and gunshots from noise. Each classifier is trained using different features, appropriately chosen from a set of 47 audio features, which are selected according to a 2-step process. First, feature subsets of increasing size are assembled using filter selection heuristics. Then, a classifier is trained and tested with each feature subset. The obtained classification performance is used to determine the optimal feature vector dimension. This allows a noticeable speed-up w.r.t. wrapper feature selection methods. In order to validate the proposed detection algorithm, we carried out extensive experiments on a rich set of gunshots and screams mixed with ambient noise at different SNRs. Our results demonstrate that the system is able to guarantee a precision of 90% at a false rejection rate of 8%.",
"title": ""
},
{
"docid": "4e6ca2d20e904a0eb72fcdcd1164a5e2",
"text": "Fraudulent activities (e.g., suspicious credit card transaction, financial reporting fraud, and money laundering) are critical concerns to various entities including bank, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims such as a financial loss. Over the years, fraud analysis techniques underwent a rigorous development. However, lately, the advent of Big data led to vigorous advancement of these techniques since Big Data resulted in extensive opportunities to combat financial frauds. Given that the massive amount of data that investigators need to sift through, massive volumes of data integrated from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach.",
"title": ""
}
] |
scidocsrr
|
b3c23786892b5df799932ae9f42ba066
|
Where in the World are You? Geolocation and Language Identification in Twitter
|
[
{
"docid": "f8e20046f9ad2e4ef63339f7c611e815",
"text": "We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51% of Twitter users within 100 miles of their actual location.",
"title": ""
},
{
"docid": "d438d948601b22f7de6ec9ecaaf04c63",
"text": "Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"title": ""
},
{
"docid": "81387b0f93b68e8bd6a56a4fd81477e9",
"text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.",
"title": ""
}
] |
[
{
"docid": "6a6238bb56eacc7d8ecc8f15f753b745",
"text": "Privacy-preservation has emerged to be a major concern in devising a data mining system. But, protecting the privacy of data mining input does not guarantee a privacy-preserved output. This paper focuses on preserving the privacy of data mining output and particularly the output of classification task. Further, instead of static datasets, we consider the classification of continuously arriving data streams: a rapidly growing research area. Due to the challenges of data stream classification such as vast volume, a mixture of labeled and unlabeled instances throughout the stream and timely classifier publication, enforcing privacy-preservation techniques becomes even more challenging. In order to achieve this goal, we propose a systematic method for preserving output-privacy in data stream classification that addresses several applications like loan approval, credit card fraud detection, disease outbreak or biological attack detection. Specifically, we propose an algorithm named Diverse and k-Anonymized HOeffding Tree (DAHOT) that is an amalgamation of popular data stream classification algorithm Hoeffding tree and a variant of k-anonymity and l-diversity principles. The empirical results on real and synthetic data streams verify the effectiveness of DAHOT as compared to its bedrock Hoeffding tree and two other techniques, one that learns sanitized decision trees from sampled data stream and other technique that uses ensemble-based classification. DAHOT guarantees to preserve the private patterns while classifying the data streams accurately.",
"title": ""
},
{
"docid": "cb0368397f1d8590516fe6f6d4296225",
"text": "In this paper, an ultra-wideband circular printed monopole antenna is presented. The antenna performance will be studied using the two-readymade software package IE3D and CST. The regular monopole antenna will be tuning by adding two C-shaped conductors near the antenna feeder to control the rejection band inside the WLAN frequency range (5.15–5.825GHz). The simulation results using the two software packages will be compared to each other. An empirical formula for the variation of the rejection band frequency against the mean length of C-shaped conductors is derived. The effect of the variation of the C-shaped conductor mean lengths on the center frequency of the rejection band will be discussed. The WLAN band rejection will be controlled by using two groups of PIN diodes. The circuit will be designed on RT/Duriod substrate (εr=2.2, h=1.57 mm, tanδ = 0.00019), where the simulations results using IE3D and CST are in good agreement with measurement results.",
"title": ""
},
{
"docid": "842e7c5b825669855617133b0067efc9",
"text": "This research proposes a robust method for disc localization and cup segmentation that incorporates masking to avoid misclassifying areas as well as forming the structure of the cup based on edge detection. Our method has been evaluated using two fundus image datasets, namely: D-I and D-II comprising of 60 and 38 images, respectively. The proposed method of disc localization achieves an average Fscore of 0.96 and average boundary distance of 7.7 for D-I, and 0.96 and 9.1, respectively, for D-II. The cup segmentation method attains an average Fscore of 0.88 and average boundary distance of 13.8 for D-I, and 0.85 and 18.0, respectively, for D-II. The estimation errors (mean ± standard deviation) of our method for the value of vertical cup-to-disc diameter ratio against the result of the boundary by the expert of DI and D-II have similar value, namely 0.04 ± 0.04. Overall, the result of ourmethod indicates its robustness for glaucoma evaluation. B Anindita Septiarini [email protected] Agus Harjoko [email protected] Reza Pulungan [email protected] Retno Ekantini [email protected] 1 Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia 2 Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia 3 Department of Computer Science, Mulawarman University, Samarinda 75123, Indonesia",
"title": ""
},
{
"docid": "114affaf4e25819aafa1c11da26b931f",
"text": "We propose a coherent mathematical model for human fingerprint images. Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.",
"title": ""
},
{
"docid": "deef530ef2132c758477c88e0399be46",
"text": "A survey of Field-Programmable Gate Array (FPGA) architectures and the programming technologies used to customize them is presented. Programming technologies are compared on the basis of their vola fility, size, parasitic capacitance, resistance, and process technology complexity. FPGA architectures are divided into two constituents: logic block architectures and routing architectures. A classijcation of logic blocks based on their granularity is proposed and several logic blocks used in commercially available FPGA ’s are described. A brief review of recent results on the effect of logic block granularity on logic density and pe$ormance of an FPGA is then presented. Several commercial routing architectures are described in the contest of a general routing architecture model. Finally, recent results on the tradeoff between the fleibility of an FPGA routing architecture its routability and density are reviewed.",
"title": ""
},
{
"docid": "3ee6568a390b60b60c862c790b037bf5",
"text": "In the commercial software development organizations, increased complexity of products, shortened development cycles and higher customer expectations of quality have placed a major responsibility on the areas of software debugging, testing, and verification. As this issue of the IBM Systems Journal illustrates, the technology is improving on all the three fronts. However, we observe that due to the informal nature of software development as a whole, the prevalent practices in the industry are still quite immature even in areas where there is existing technology. In addition, the technology and tools in the more advanced aspects are really not ready for a large scale commercial use.",
"title": ""
},
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "ed8d116bf4ade5003506914bbb1db750",
"text": "User interfaces in modeling have traditionally followed the WIMP (Window, Icon, Menu, Pointer) paradigm. Though functional and very powerful, they can also be cumbersome and daunting to a novice user, and creating a complex model requires considerable expertise and effort. A recent trend is toward more accessible and natural interfaces, which has lead to sketch-based interfaces for modeling (SBIM). The goal is to allow sketches—hasty freehand drawings—to be used in the modeling process, from rough model creation through to fine detail construction. Mapping a 2D sketch to a 3D modeling operation is a difficult task, rife with ambiguity. To wit, we present a categorization based on how a SBIM application chooses to interpret a sketch, of which there are three primary methods: to create a 3D model, to add details to an existing model, or to deform and manipulate a model. Additionally, in this paper we introduce a survey of sketch-based interfaces focused on 3D geometric modeling applications. The canonical and recent works are presented and classified, including techniques for sketch acquisition, filtering, and interpretation. The survey also provides an overview of some specific applications of SBIM and a discussion of important challenges and open problems for researchers to tackle in the coming years. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9d6f492242f1a5eb5a4bec7c8f0060aa",
"text": "Given the challenge of gathering labeled training data, zero-shot classification, which transfers information from observed classes to recognize unseen classes, has become increasingly popular in the computer vision community. Most existing zero-shot learning methods require a user to first provide a set of semantic visual attributes for each class as side information before applying a two-step prediction procedure that introduces an intermediate attribute prediction problem. In this paper, we propose a novel zero-shot classification approach that automatically learns label embeddings from the input data in a semi-supervised large-margin learning framework. The proposed framework jointly considers multi-class classification over all classes (observed and unseen) and tackles the target prediction problem directly without introducing intermediate prediction problems. It also has the capacity to incorporate semantic label information from different sources when available. To evaluate the proposed approach, we conduct experiments on standard zero-shot data sets. The empirical results show the proposed approach outperforms existing state-of-the-art zero-shot learning methods.",
"title": ""
},
{
"docid": "5a0c1ac2103a2804442872c6ed861ae8",
"text": "In this paper, we report findings from an exploratory study concerning the security of 15 different wireless technologies used in aviation. 242 aviation professionals and experts from 24 different countries completed an on-line questionnaire about their use and perceptions of each of these technologies. We examine the respondents’ familiarity with and reliance on each technology, with particular regard to their security. Furthermore, we analyse respondents’ perceptions of the possible impact of a wireless attack on the air traffic control system, from both a safety and a business point of view. We deepen these insights with statistical analysis comparing five different stakeholder groups: pilots, air traffic controllers, aviation authorities, aviation engineers, and private pilots.",
"title": ""
},
{
"docid": "ec7e1688c34ee4f6698fb0a2b5a11260",
"text": "Cell segmentation in microscopy imagery is essential for many bioimage applications such as cell tracking. To segment cells from the background accurately, we present a pixel classification approach that is independent of cell type or imaging modality. We train a set of Bayesian classifiers from clustered local training image patches. Each Bayesian classifier is an expert to make decision in its specific domain. The decision from the mixture of experts determines how likely a new pixel is a cell pixel. We demonstrate the effectiveness of this approach on four cell types with diverse morphologies under different microscopy imaging modalities.",
"title": ""
},
{
"docid": "5ac08a4d385dd44fa00db842a8d9f283",
"text": "Understanding what interests and delights users is critical to effective behavioral targeting, especially in information-poor contexts. As users interact with content and advertising, their passive behavior can reveal their interests towards advertising. Two issues are critical for building effective targeting methods: what metric to optimize for and how to optimize. More specifically, we first attempt to understand what the learning objective should be for behavioral targeting so as to maximize advertiser's performance. While most popular advertising methods optimize for user clicks, as we will show, maximizing clicks does not necessarily imply maximizing purchase activities or transactions, called conversions, which directly translate to advertiser's revenue. In this work we focus on conversions which makes a more relevant metric but also the more challenging one. Second is the issue of how to represent and combine the plethora of user activities such as search queries, page views, ad clicks to perform the targeting. We investigate several sources of user activities as well as methods for inferring conversion likelihood given the activities. We also explore the role played by the temporal aspect of user activities for targeting, e.g., how recent activities compare to the old ones. Based on a rigorous offline empirical evaluation over 200 individual advertising campaigns, we arrive at what we believe are best practices for behavioral targeting. We deploy our approach over live user traffic to demonstrate its superiority over existing state-of-the-art targeting methods.",
"title": ""
},
{
"docid": "c200b79726ca0b441bc1311975bf0008",
"text": "This article introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90nm to 22nm and beyond. At microarchitectural level, McPAT includes models for the fundamental components of a complete chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, and integrated system components such as memory controllers and Ethernet controllers. At circuit level, McPAT supports detailed modeling of critical-path timing, area, and power. At technology level, McPAT models timing, area, and power for the device types forecast in the ITRS roadmap. McPAT has a flexible XML interface to facilitate its use with many performance simulators.\n Combined with a performance simulator, McPAT enables architects to accurately quantify the cost of new ideas and assess trade-offs of different architectures using new metrics such as Energy-Delay-Area2 Product (EDA2P) and Energy-Delay-Area Product (EDAP). This article explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting trade-offs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies from cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks for manycore designs at the 22nm technology shows that 8-core clustering gives the best energy-delay product, whereas when die area is taken into account, 4-core clustering gives the best EDA2P and EDAP.",
"title": ""
},
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "6e9edeffb12cf8e50223a933885bcb7c",
"text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.",
"title": ""
},
{
"docid": "4e2ca9943ba585211f8e5eb9de4c8675",
"text": "This paper describes progress made on the T-Wing tail-sitter UAV programme currently being undertaken via a collaborative research agreement between Sonacom Pty Ltd and the University of Sydney. This vehicle is being developed in response to a perceived requirement for a more flexible surveillance and remote sensing platform than is currently available. Missions for such a platform include coastal surveillance, defence intelligence gathering and environmental monitoring. The use of an unmanned air-vehicle (UAV) with a vertical takeoff and landing (VTOL) capability that can still enjoy efficient horizontal flight promises significant advantages over other vehicles for such missions. One immediate advantage is the potential to operate from small patrol craft and frigates equipped with helipads. In this role such a vehicle could be used for maritime surveillance; sonobuoy or other store deployment; communication relay; convoy protection; and support for ground and helicopter operations. The programme currently being undertaken involves building a 50-lb fully autonomous VTOL tail-sitter UAV to demonstrate successful operation near the ground in windy conditions and to perform the transition maneuvers between vertical and horizontal flight. This will then allow the development of a full-size prototype vehicle, (The “Mirli”) to be undertaken as a prelude to commercial production. The Need for a Tail-Sitter UAV Defence Applications Although conflicts over the last 20 years have demonstrated the importance of UAV systems in facilitating real-time intelligence gathering, it is clear that most current systems still do not possess the operational flexibility that is desired by force commanders. One of the reasons for this is that most UAVs have adopted relatively conventional aircraft configurations. This leads directly to operational limitations because it either necessitates take-off and landing from large fixed runways; or the use of specialized launch and recovery methods such catapults, rockets, nets, parachutes and airbags. One potential solution to these operational difficulties is a tail-sitter VTOL UAV. Such a vehicle has few operational requirements other than a small clear area for take-off and landing. While other VTOL concepts share this operational advantage over conventional vehicles the tail-sitter has some other unique benefits. In comparison to helicopters, a tailsitter vehicle does not suffer the same performance penalties in terms of dash-speed, range and endurance because it spends the majority of its mission in a more efficient airplane flight mode. The only other VTOL concepts that combine vertical and horizontal flight are the tiltrotor and tilt-wing, however, both involve significant extra mechanical complexity in comparison to the tail-sitter vehicle, which has fixed wings and nacelles. A further simplification can be made in comparison to other VTOL designs by the use of prop-wash over wing and fin mounted control surfaces to effect control during vertical flight, thus obviating the need for cyclic rotor control. For naval forces, a tail-sitter VTOL UAV has enormous potential as an aircraft that can be deployed from small ships and used for long-range reconnaissance and surveillance; over† Department of Aeronautical Engineering, University of Sydney ‡ Sonacom Pty Ltd the-horizon detection of low-flying missiles and aircraft; deployment of remote acoustic sensors; and as a platform for aerial support and communications. 
The vehicle could also be used in anti-submarine activities and anti-surface operations and is ideal for battlefield monitoring over both sea and land. The obvious benefit in comparison to a conventional UAV is the operational flexibility provided by the vertical launch and recovery of the vehicle. The US Navy and Marine Corps who anticipate spending approximately US$350m on their VTUAV program have clearly recognized this fact. Figure 1: A Typical Naval UAV Mission: Monitoring Acoustic Sensors For ground based forces a tail-sitter vehicle is also attractive because it allows UAV systems to be quickly deployed from small cleared areas with a minimum of support equipment. This makes the UAVs less vulnerable to attacks on fixed bases without the need to set-up catapult launchers or recovery nets. It is envisaged that ground forces would mainly use small VTOL UAVs as reconnaissance and communication relay platforms. Civilian Applications Besides the defence requirements, there are also many civilian applications for which a VTOL UAV is admirably suited. Coastal surveillance to protect national borders from illegal immigrants and illicit drugs is clearly an area where such vehicles could be used. The VTOL characteristics in this role are an advantage, as they allow such vehicles to be based in remote areas without the fixed infrastructure of airstrips, or to be operated from small coastal patrol vessels. Further applications are also to be found in mineral exploration and environmental monitoring in remote locations. While conventional vehicles could of course accomplish such tasks their effectiveness may be limited if forced to operate from bases a long way from the area of interest. Tail-Sitters: A Historical Perspective Although tail-sitter vehicles have been investigated over the last 50 years as a means to combine the operational advantages of vertical flight enjoyed by helicopters with the better horizontal flight attributes of conventional airplanes, no successful tail-sitter vehicles have ever been produced. One of the primary reasons for this is that tail-sitters such as the Convair XF-Y1 and Lockheed XF-V1 (Figure 2) experimental vehicles of the 1950s proved to be very difficult to pilot during vertical flight and the transition maneuvers. Figure 2: Convair XF-Y1 and Lockheed XF-V1 Tail-Sitter Aircraft. 2 With the advent of modern computing technology and improvements in sensor reliability, capability and cost it is now possible to overcome these piloting disadvantages by transitioning the concept to that of an unmanned vehicle. With the pilot replaced by modern control systems it should be possible to realise the original promise of the tail-sitter configuration. The tail-sitter aircraft considered in this paper differs substantially from its earlier counterparts and is most similar in configuration to the Boeing Heliwing vehicle of the early 1990s. This vehicle had a 1450-lb maximum takeoff weight (MTOW) with a 200-lb payload, 5-hour endurance and 180 kts maximum speed and used twin rotors powered by a single 240 SHP turbine engine. A picture of the Heliwing is shown in Figure 3. Figure 3: Boeing Heliwing Vehicle",
"title": ""
},
{
"docid": "2e99cd85bb172d545648f18a76a0ff14",
"text": "In this work, the use of type-2 fuzzy logic systems as a novel approach for predicting permeability from well logs has been investigated and implemented. Type-2 fuzzy logic system is good in handling uncertainties, including uncertainties in measurements and data used to calibrate the parameters. In the formulation used, the value of a membership function corresponding to a particular permeability value is no longer a crisp value; rather, it is associated with a range of values that can be characterized by a function that reflects the level of uncertainty. In this way, the model will be able to adequately account for all forms of uncertainties associated with predicting permeability from well log data, where uncertainties are very high and the need for stable results are highly desirable. Comparative studies have been carried out to compare the performance of the proposed type-2 fuzzy logic system framework with those earlier used methods, using five different industrial reservoir data. Empirical results from simulation show that type-2 fuzzy logic approach outperformed others in general and particularly in the area of stability and ability to handle data in uncertain situations, which are common characteristics of well logs data. Another unique advantage of the newly proposed model is its ability to generate, in addition to the normal target forecast, prediction intervals as its by-products without extra computational cost. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f6c56abce40b67850b37f611e92c2340",
"text": "How do users generate an illusion of presence in a rich and consistent virtual environment from an impoverished, incomplete, and often inconsistent set of sensory cues? We conducted an experiment to explore how multimodal perceptual cues are integrated into a coherent experience of virtual objects and spaces. Specifically, we explored whether inter-modal integration contributes to generating the illusion of presence in virtual environments. To discover whether intermodal integration might play a role in presence, we looked for evidence of intermodal integration in the form of cross-modal interactionsperceptual illusions in which users use sensory cues in one modality to fill in the missing components of perceptual experience. One form of cross-modal interaction, a cross-modal transfer, is defined as a form of synesthesia, that is, a perceptual illusion in which stimulation to a sensory modality connected to the interface (such as the visual modality) is accompanied by perceived stimulation to an unconnected sensory modality that receives no apparent stimulation from the virtual environment (such as the haptic modality). Users of our experimental virtual environment who manipulated the visual analog of a physical force, a virtual spring, reported haptic sensations of physical resistance, even though the interface included no haptic displays. A path model of the data suggested that this cross-modal illusion was correlated with and dependent upon the sensation of spatial and sensory presence. We conclude that this is evidence that presence may derive from the process of multi-modal integration and, therefore, may be associated with other illusions, such as cross-modal transfers, that result from the process of creating a coherent mental model of the space. Finally, we suggest that this perceptual phenomenon might be used to improve user experiences with multimodal interfaces, specifically by supporting limited sensory displays (such as haptic displays) with appropriate synesthetic stimulation to other sensory modalities (such as visual and auditory analogs of haptic forces).",
"title": ""
},
{
"docid": "ec5bdd52fa05364923cb12b3ff25a49f",
"text": "A system to prevent subscription fraud in fixed telecommunications with high impact on long-distance carriers is proposed. The system consists of a classification module and a prediction module. The classification module classifies subscribers according to their previous historical behavior into four different categories: subscription fraudulent, otherwise fraudulent, insolvent and normal. The prediction module allows us to identify potential fraudulent customers at the time of subscription. The classification module was implemented using fuzzy rules. It was applied to a database containing information of over 10,000 real subscribers of a major telecom company in Chile. In this database, a subscription fraud prevalence of 2.2% was found. The prediction module was implemented as a multilayer perceptron neural network. It was able to identify 56.2% of the true fraudsters, screening only 3.5% of all the subscribers in the test set. This study shows the feasibility of significantly preventing subscription fraud in telecommunications by analyzing the application information and the customer antecedents at the time of application. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "578130d8ef9d18041c84ed226af8c84a",
"text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.",
"title": ""
}
] |
scidocsrr
|
ebddb58f5dc1e5b0c2a05cbf646cf9d7
|
Novel High Gain Low Noise CMOS Instrumentation Amplifier for Biomedical Applications
|
[
{
"docid": "0e0e802a0e5635ac12f603f160d8d4de",
"text": "This paper describes the design and simulation of a fully-differential, high gain, high speed CMOS Operational Transconductance Amplifier (OTA). The op-amp is designed for unity gain sampler stage of 14bit 12.5Ms/s pipeline analog-to digital converter. The design is implemented using a folding cascode topology with the addition of gain boosting amplifiers for increased gain. Common-mode feedback (CMFB) is used to stable the designed OTA against temperature and other process variations. This design has been implemented in 0.13μm IBM RF mixed signal CMOS Technology. The Spectre simulation shows the DC gain of 91.5 dB and a unity-gain frequency of 714.5MHz with phase margin of 62° (double 7.5-pF load) while consuming 9 mW power. For the normal corner, the settling time to 1/2 LSB of 14bit A/D converter accuracy is 40 ns.",
"title": ""
}
] |
[
{
"docid": "42d1368bf2c5e659f9e9a215e1ebbd4c",
"text": "The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.",
"title": ""
},
{
"docid": "64588dd8ef9310b3682e56a9c74ce292",
"text": "Diagnostic testing can be used to discriminate subjects with a target disorder from subjects without it. Several indicators of diagnostic performance have been proposed, such as sensitivity and specificity. Using paired indicators can be a disadvantage in comparing the performance of competing tests, especially if one test does not outperform the other on both indicators. Here we propose the use of the odds ratio as a single indicator of diagnostic performance. The diagnostic odds ratio is closely linked to existing indicators, it facilitates formal meta-analysis of studies on diagnostic test performance, and it is derived from logistic models, which allow for the inclusion of additional variables to correct for heterogeneity. A disadvantage is the impossibility of weighing the true positive and false positive rate separately. In this article the application of the diagnostic odds ratio in test evaluation is illustrated.",
"title": ""
},
{
"docid": "61411c55041f40c3b0c63f3ebd4c621f",
"text": "This paper presents an application of neural network approach for the prediction of peak ground acceleration (PGA) using the strong motion data from Turkey, as a soft computing technique to remove uncertainties in attenuation equations. A training algorithm based on the Fletcher–Reeves conjugate gradient back-propagation was developed and employed for three sample sets of strong ground motion. The input variables in the constructed artificial neural network (ANN) model were the magnitude, the source-to-site distance and the site conditions, and the output was the PGA. The generalization capability of ANN algorithms was tested with the same training data. To demonstrate the authenticity of this approach, the network predictions were compared with the ones from regressions for the corresponding attenuation equations. The results indicated that the fitting between the predicted PGA values by the networks and the observed ones yielded high correlation coefficients (R). In addition, comparisons of the correlations by the ANN and the regression method showed that the ANN approach performed better than the regression. Even though the developed ANN models suffered from optimal configuration about the generalization capability, they can be conservatively used to well understand the influence of input parameters for the PGA predictions. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4035f8cf9891a1cbd823f71bbe186672",
"text": "A significant challenge in applying model-based testing on software systems is that manually designing the test models requires considerable amount of effort and deep expertise in formal modeling. When an existing system is being modeled and tested, there are various techniques to automate the process of producing the models based on the implementation. Some approaches aim to fully automated creation of the models, while others aim to automate the first steps to create an initial model to serve as a basis to start the manual modeling process. Especially graphical user interface (GUI) applications, including mobile and Web applications, have been a good domain for model extraction, reverse engineering, and specification mining approaches. In this chapter, we survey various automated modeling techniques, with a special focus on GUI models and their usefulness in analyzing and testing of the modeled GUI applications.",
"title": ""
},
{
"docid": "b074ba4ae329ffad0da3216dc84b22b9",
"text": "A recent research trend in Artificial Intelligence (AI) is the combination of several programs into one single, stronger, program; this is termed portfolio methods. We here investigate the application of such methods to Game Playing Programs (GPPs). In addition, we consider the case in which only one GPP is available by decomposing this single GPP into several ones through the use of parameters or even simply random seeds. These portfolio methods are trained in a learning phase. We propose two different offline approaches. The simplest one, BestArm, is a straightforward optimization of seeds or parameters; it performs quite well against the original GPP, but performs poorly against an opponent which repeats games and learns. The second one, namely Nash-portfolio, performs similarly in a “one game” test, and is much more robust against an opponent who learns. We also propose an online learning portfolio, which tests several of the GPP repeatedly and progressively switches to the best one using a bandit algorithm.",
"title": ""
},
{
"docid": "aaf69cb42fc9d17cf0ae3b80a55f12d6",
"text": "Bringing Blockchain technology and business process management together, we follow the Design Science Research approach and design, implement, and evaluate a Blockchain prototype for crossorganizational workflow management together with a German bank. For the use case of a documentary letter of credit we describe the status quo of the process, identify areas of improvement, implement a Blockchain solution, and compare both workflows. The prototype illustrates that the process, as of today paper-based and with high manual effort, can be significantly improved. Our research reveals that a tamper-proof process history for improved auditability, automation of manual process steps and the decentralized nature of the system can be major advantages of a Blockchain solution for crossorganizational workflow management. Further, our research provides insights how Blockchain technology can be used for business process management in general.",
"title": ""
},
{
"docid": "18bd983abb9af56667ff476364c821f8",
"text": "Lie is a false statement made with deliberate intent to deceive, this is intentional untruth. People use different technologies of lie detection as Pattern recognition is a science to discover if an individual is telling the truth or lying. Patterns can describe some characteristics of liars, in this work to face and speech specifically. Face recognition take patttern of face and speech recognition of voice or speech to text. So this paper pretends realize a compendium on lie detection techniques and pattern recognition face and speech. It permits to review the actual state of the tecnhologies realized in these recognitions. Also It presents an analysis of tecnhologies using some of these techniques to resum the result.",
"title": ""
},
{
"docid": "4e37fee25234a84a32b2ffc721ade2f8",
"text": "Over the last decade, the deep neural networks are a hot topic in machine learning. It is breakthrough technology in processing images, video, speech, text and audio. Deep neural network permits us to overcome some limitations of a shallow neural network due to its deep architecture. In this paper we investigate the nature of unsupervised learning in restricted Boltzmann machine. We have proved that maximization of the log-likelihood input data distribution of restricted Boltzmann machine is equivalent to minimizing the cross-entropy and to special case of minimizing the mean squared error. Thus the nature of unsupervised learning is invariant to different training criteria. As a result we propose a new technique called “REBA” for the unsupervised training of deep neural networks. In contrast to Hinton’s conventional approach to the learning of restricted Boltzmann machine, which is based on linear nature of training rule, the proposed technique is founded on nonlinear training rule. We have shown that the classical equations for RBM learning are a special case of the proposed technique. As a result the proposed approach is more universal in contrast to the traditional energy-based model. We demonstrate the performance of the REBA technique using wellknown benchmark problem. The main contribution of this paper is a novel view and new understanding of an unsupervised learning in deep neural networks.",
"title": ""
},
{
"docid": "ad076495666725ed3fd871c04d6b6794",
"text": "Elite endurance athletes possess a high capacity for whole-body maximal fat oxidation (MFO). The aim was to investigate the determinants of a high MFO in endurance athletes. The hypotheses were that augmented MFO in endurance athletes is related to concomitantly increments of skeletal muscle mitochondrial volume density (MitoVD ) and mitochondrial fatty acid oxidation (FAOp ), that is, quantitative mitochondrial adaptations as well as intrinsic FAOp per mitochondria, that is, qualitative adaptations. Eight competitive male cross-country skiers and eight untrained controls were compared in the study. A graded exercise test was performed to determine MFO, the intensity where MFO occurs (FatMax ), and V ˙ O 2 Max . Skeletal muscle biopsies were obtained to determine MitoVD (electron microscopy), FAOp , and OXPHOSp (high-resolution respirometry). The following were higher (P < 0.05) in endurance athletes compared to controls: MFO (mean [95% confidence intervals]) (0.60 g/min [0.50-0.70] vs 0.32 [0.24-0.39]), FatMax (46% V ˙ O 2 Max [44-47] vs 35 [34-37]), V ˙ O 2 Max (71 mL/min/kg [69-72] vs 48 [47-49]), MitoVD (7.8% [7.2-8.5] vs 6.0 [5.3-6.8]), FAOp (34 pmol/s/mg muscle ww [27-40] vs 21 [17-25]), and OXPHOSp (108 pmol/s/mg muscle ww [104-112] vs 69 [68-71]). Intrinsic FAOp (4.0 pmol/s/mg muscle w.w/MitoVD [2.7-5.3] vs 3.3 [2.7-3.9]) and OXPHOSp (14 pmol/s/mg muscle ww/MitoVD [13-15] vs 11 [10-13]) were, however, similar in the endurance athletes and untrained controls. MFO and MitoVD correlated (r2 = 0.504, P < 0.05) in the endurance athletes. A strong correlation between MitoVD and MFO suggests that expansion of MitoVD might be rate-limiting for MFO in the endurance athletes. In contrast, intrinsic mitochondrial changes were not associated with augmented MFO.",
"title": ""
},
{
"docid": "815819dc633c8434eb4e8c02b3c88186",
"text": "Volume and weight limitations for components in hybrid electrical vehicle (HEV) propulsion systems demand highly-compact and highly-efficient power electronics. The application of silicon carbide (SiC) semiconductor technology in conjunction with high temperature (HT) operation allows the power density of the DC-DC converters and inverters to be increased. Elevated ambient temperatures of above 200degC also affects the gate drives attached to the power semiconductors. This paper focuses on the selection of HT components and discusses different gate drive topologies for SiC JFETs with respect to HT operation capability, limitations, dynamic performance and circuit complexity. An experimental performance comparison of edge-triggered and phase-difference HT drivers with a conventional room temperature JFET gate driver is given. The proposed edge-triggered gate driver offers high switching speeds and a cost effective implementation. Switching tests at 200degC approve an excellent performance at high temperature and a low temperature drift of the driver output voltage.",
"title": ""
},
{
"docid": "03dc23b2556e21af9424500e267612bb",
"text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classify file fragment based on grayscale image. The proposed method treats a file fragment as a grayscale image, and uses image classification method to classify file fragment. Furthermore, two models based on file-unbiased and type-unbiased are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.",
"title": ""
},
{
"docid": "051afcc588dc8888699fd2e627d935ac",
"text": "Objective: Evaluation of dietary intakes and lifestyle factors of German vegans.Design: Cross-sectional study.Settings: Germany.Subjects: Subjects were recruited through journal advertisements. Of 868 volunteers, only 154 participated in all study segments (pre- and main questionnaire, two 9-day food frequency questionnaires, blood sampling) and fulfilled the following study criteria: vegan dietary intake at least 1 year prior to study start, minimum age of 18 y, no pregnancy or childbirth during the last 12 months.Interventions: No interventions.Results: All the 154 subjects had a comparatively low BMI (median 21.2 kg/m2), with an extremely low mean consumption of alcohol (0.77±3.14 g/day) and tobacco (96.8% were nonsmokers). Mean energy intake (total collective: 8.23±2.77 MJ) was higher in strict vegans than in moderate ones. Mean carbohydrate, fat, and protein intakes in proportion to energy (total collective: 57.1:29.7:11.6%) agreed with current recommendations. Recommended intakes for vitamins and minerals were attained through diet, except for calcium (median intake: 81.1% of recommendation), iodine (median: 40.6%), and cobalamin (median: 8.8%). For the male subgroup, the intake of a small amount of food of animal origin improved vitamin and mineral nutrient densities (except for zinc), whereas this was not the case for the female subgroup (except for calcium).Conclusion: In order to reach favourable vitamin and mineral intakes, vegans should consider taking supplements containing riboflavin, cobalamin, calcium, and iodine. Intake of total energy and protein should also be improved.Sponsorship: EDEN Foundation, Bad Soden, Germany; Stoll VITA Foundation, Waldshut-Tiengen, Germany",
"title": ""
},
{
"docid": "866a8e2669de31df1235637988cdb254",
"text": "Industry 4.0, or Digital Manufacturing, is a vision of interconnected services to facilitate innovation in the manufacturing sector. A fundamental requirement of innovation is the ability to be able to visualise manufacturing data, in order to discover new insight for increased competitive advantage. This article describes the enabling technologies that facilitate In-Transit Analytics, which is a necessary precursor for Industrial Internet of Things (IIoT) visualisation.",
"title": ""
},
{
"docid": "7ac1249e901e558443bc8751b11c9427",
"text": "Despite the growing popularity of leasing as an alternative to purchasing a vehicle, there is very little research on how consumers choose among various leasing and ̄nancing (namely buying) contracts and how this choice a®ects the brand they choose. In this paper therefore, we develop a structural model of the consumer's choice of automobile brand and the related decision of whether to lease or buy it. We conceptualize the leasing and buying of the same vehicle as two di®erent goods, each with its own costs and bene ̄ts. The di®erences between the two types of contracts are summarized along three dimensions: (i) the \\net price\" or ̄nancial cost of the contract, (ii) maintenance and repair costs and (iii) operating costs, which depend on the consumer's driving behavior. Based on consumer utility maximization, we derive a nested logit of brand and contract choice that captures the tradeo®s among all three costs. The model is estimated on a dataset of new car purchases from the near luxury segment of the automobile market. The optimal choice of brand and contract is determined by the consumer's implicit interest rate and the number of miles she expects to drive, both of which are estimated as parameters of the model. The empirical results yield several interesting ̄ndings. We ̄nd that (i) cars that deteriorate faster are more likely to be leased than bought, (ii) the estimated implicit interest rate is higher than the market rate, which implies that consumers do not make e±cient tradeo®s between the net price and operating costs and may often incorrectly choose to lease and (iii) the estimate of the annual expected mileage indicates that most consumers would incur substantial penalties if they lease, which explains why buying or ̄nancing continues to be more popular than leasing. This research also provides several interesting managerial insights into the e®ectiveness of various promotional instruments. We examine this issue by looking at (i) sales response to a promotion, (ii) the ability of the promotion to draw sales from other brands and (iii) its overall pro ̄tability. We ̄nd, for example that although the sales response to a cash rebate on a lease is greater than an equivalent increase in the residual value, under certain conditions and for certain brands, a residual value promotion yields higher pro ̄ts. These ̄ndings are of particular value to manufacturers in the prevailing competitive environment, which is marked by the extensive use of large rebates and 0% APR o®ers.",
"title": ""
},
{
"docid": "4434ad83cad1b8dc353f24fdf12a606c",
"text": "Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not used, since existing implementations are not openly shared, resulting in software with low usability, and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.",
"title": ""
},
{
"docid": "f73cd33c8dfc9791558b239aede6235b",
"text": "Web clustering engines organize search results by topic, thus offering a complementary view to the flat-ranked list returned by conventional search engines. In this survey, we discuss the issues that must be addressed in the development of a Web clustering engine, including acquisition and preprocessing of search results, their clustering and visualization. Search results clustering, the core of the system, has specific requirements that cannot be addressed by classical clustering algorithms. We emphasize the role played by the quality of the cluster labels as opposed to optimizing only the clustering structure. We highlight the main characteristics of a number of existing Web clustering engines and also discuss how to evaluate their retrieval performance. Some directions for future research are finally presented.",
"title": ""
},
{
"docid": "1e0eade3cc92eb79160aeac35a3a26d1",
"text": "Global environmental concerns and the escalating demand for energy, coupled with steady progress in renewable energy technologies, are opening up new opportunities for utilization of renewable energy vailable online 12 January 2011",
"title": ""
},
{
"docid": "e07a731a2c4fa39be27a13b5b5679593",
"text": "Ocean acidification is rapidly changing the carbonate system of the world oceans. Past mass extinction events have been linked to ocean acidification, and the current rate of change in seawater chemistry is unprecedented. Evidence suggests that these changes will have significant consequences for marine taxa, particularly those that build skeletons, shells, and tests of biogenic calcium carbonate. Potential changes in species distributions and abundances could propagate through multiple trophic levels of marine food webs, though research into the long-term ecosystem impacts of ocean acidification is in its infancy. This review attempts to provide a general synthesis of known and/or hypothesized biological and ecosystem responses to increasing ocean acidification. Marine taxa covered in this review include tropical reef-building corals, cold-water corals, crustose coralline algae, Halimeda, benthic mollusks, echinoderms, coccolithophores, foraminifera, pteropods, seagrasses, jellyfishes, and fishes. The risk of irreversible ecosystem changes due to ocean acidification should enlighten the ongoing CO(2) emissions debate and make it clear that the human dependence on fossil fuels must end quickly. Political will and significant large-scale investment in clean-energy technologies are essential if we are to avoid the most damaging effects of human-induced climate change, including ocean acidification.",
"title": ""
},
{
"docid": "3aaffdda034c762ad36954386d796fb9",
"text": "KNTU CDRPM is a cable driven redundant parallel manipulator, which is under investigation for possible high speed and large workspace applications. This newly developed mechanisms have several advantages compared to the conventional parallel mechanisms. Its rotational motion range is relatively large, its redundancy improves safety for failure in cables, and its design is suitable for long-time high acceleration motions. In this paper, collision-free workspace of the manipulator is derived by applying fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design of a spatial cable-driven parallel manipulators. The results are elaborated in three presentations; constant-orientation workspace, total orientation workspace and orientation workspace.",
"title": ""
}
] |
scidocsrr
|
1dbb6e12eb20c12571cad8daa0271cbc
|
Machine Learning for Wireless Networks with Artificial Intelligence: A Tutorial on Neural Networks
|
[
{
"docid": "7785c16b3d0515057c8a0ec0ed55b5de",
"text": "Most ad hoc mobile devices today operate on batteries. Hence, power consumption becomes an important issue. To maximize the lifetime of ad hoc mobile networks, the power consumption rate of each node must be evenly distributed, and the overall transmission power for each connection request must be minimized. These two objectives cannot be satisfied simultaneously by employing routing algorithms proposed in previous work. In this article we present a new power-aware routing protocol to satisfy these two constraints simultaneously; we also compare the performance of different types of power-related routing algorithms via simulation. Simulation results confirm the need to strike a balance in attaining service availability performance of the whole network vs. the lifetime of ad hoc mobile devices.",
"title": ""
}
] |
[
{
"docid": "333b3349cdcb6ddf44c697e827bcfe62",
"text": "Harmful cyanobacterial blooms, reflecting advanced eutrophication, are spreading globally and threaten the sustainability of freshwater ecosystems. Increasingly, non-nitrogen (N(2))-fixing cyanobacteria (e.g., Microcystis) dominate such blooms, indicating that both excessive nitrogen (N) and phosphorus (P) loads may be responsible for their proliferation. Traditionally, watershed nutrient management efforts to control these blooms have focused on reducing P inputs. However, N loading has increased dramatically in many watersheds, promoting blooms of non-N(2) fixers, and altering lake nutrient budgets and cycling characteristics. We examined this proliferating water quality problem in Lake Taihu, China's 3rd largest freshwater lake. This shallow, hyper-eutrophic lake has changed from bloom-free to bloom-plagued conditions over the past 3 decades. Toxic Microcystis spp. blooms threaten the use of the lake for drinking water, fisheries and recreational purposes. Nutrient addition bioassays indicated that the lake shifts from P limitation in winter-spring to N limitation in cyanobacteria-dominated summer and fall months. Combined N and P additions led to maximum stimulation of growth. Despite summer N limitation and P availability, non-N(2) fixing blooms prevailed. Nitrogen cycling studies, combined with N input estimates, indicate that Microcystis thrives on both newly supplied and previously-loaded N sources to maintain its dominance. Denitrification did not relieve the lake of excessive N inputs. Results point to the need to reduce both N and P inputs for long-term eutrophication and cyanobacterial bloom control in this hyper-eutrophic system.",
"title": ""
},
{
"docid": "81fd8d4c38a65c5d0df0c849e8c080fc",
"text": "The paper presents two types of one cycle current control method for Triple Active Bridge(TAB) phase-shifted DC-DC converter integrating Renewable Energy Source(RES), Energy Storage System(ESS) and a output dc bus. The main objective of the current control methods is to control the transformer current in each cycle so that dc transients are eliminated during phase angle change from one cycle to the next cycle. In the proposed current control methods, the transformer currents are sampled within a switching cycle and the phase shift angles for the next switching cycle are generated based on sampled current values and current references. The discussed one cycle control methods also provide an inherent power decoupling feature for the three port phase shifted triple active bridge converter. Two different methods, (a) sampling and updating twice in a switching cycle and (b) sampling and updating once in a switching cycle, are explained in this paper. The current control methods are experimentally verified using digital implementation technique on a laboratory made hardware prototype.",
"title": ""
},
{
"docid": "020ee6cc73f38e738a27d51d8a832bc2",
"text": "The growing interest in natural alternatives to synthetic petroleum-based dyes for food applications necessitates looking at nontraditional sources of natural colors. Certain sorghum varieties accumulate large amounts of poorly characterized pigments in their nongrain tissue. We used High Performance Liquid Chromatography-Tandem Mass Spectroscopy to characterize sorghum leaf sheath pigments and measured the stability of isolated pigments in the presence of bisulfite at pH 1.0 to 7.0 over a 4-wk period. Two new 3-deoxyanthocyanidin compounds were identified: apigeninidin-flavene dimer and apigenin-7-O-methylflavene dimer. The dimeric molecules had near identical UV-Vis absorbance profiles at pH 1.0 to 7.0, with no obvious sign of chalcone or quinoidal base formation even at the neutral pH, indicating unusually strong resistance to hydrophilic attack. The dimeric 3-deoxyanthocyanidins were also highly resistant to nucleophilic attack by SO(2); for example, apigeninidin-flavene dimer lost less than 20% of absorbance, compared to apigeninidin monomer, which lost more than 80% of absorbance at λ(max) within 1 h in the presence of SO(2). The increased molecular complexity of the dimeric 3-deoxyanthocyanidins compared to their monomers may be responsible for their unusual stability in the presence of bisulfite; these compounds present new interesting opportunities for food applications.",
"title": ""
},
{
"docid": "4d0b163e7c4c308696fa5fd4d93af894",
"text": "Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by handengineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging highdimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.",
"title": ""
},
{
"docid": "bbdf68b20aed9801ece9dc2adaa46ba5",
"text": "Coflow is a collection of parallel flows, while a job consists of a set of coflows. A job is completed if all of the flows completes in the coflows. Therefore, the completion time of a job is affected by the latest flows in the coflows. To guarantee the job completion time and service performance, the job deadline and the dependency of coflows needs to be considered in the scheduling process. However, most existing methods ignore the dependency of coflows which is important to guarantee the job completion. In this paper, we take the dependency of coflows into consideration. To guarantee job completion for performance, we formulate a deadline and dependency-based model called MTF scheduler model. The purpose of MTF model is to minimize the overall completion time with the constraints of deadline and network capacity. Accordingly, we propose our method to schedule dependent coflows. Especially, we consider the dependent coflows as an entirety and propose a valuable coflow scheduling first MTF algorithm. We conduct extensive simulations to evaluate MTF method which outperforms the conventional short job first method as well as guarantees the job deadline.",
"title": ""
},
{
"docid": "ab2d496a4a91f7221a827a65191976f1",
"text": "We analyze attacks that take advantage of the data length information leaked by HTTP transactions over the TLS protocol, in order to link clients with particular resources they might access on a web site. The threat model considered is a public news site that tries to protect the patterns of requests and submissions of its users by encrypting the HTTP connections using TLS, against an attacker that can observe all traffic. We show how much information an attacker can infer about single requests and submissions knowing only their length. A Hidden Markov Model is then presented that analyzes sequences of requests and finds the most plausible resources accessed. We note that Anonymizing systems such as the Safe Web service could be the victim of such attacks, and discuss some techniques that can be used to counter them.",
"title": ""
},
{
"docid": "a5c0ad9c841245e57bb71b19b4ad24b1",
"text": "HTTP video streaming, such as Flash video, is widely deployed to deliver stored media. Owing to TCP's reliable service, the picture and sound quality would not be degraded by network impairments, such as high delay and packet loss. However, the network impairments can cause rebuffering events which would result in jerky playback and deform the video's temporal structure. These quality degradations could adversely affect users' quality of experience (QoE). In this paper, we investigate the relationship among three levels of quality of service (QoS) of HTTP video streaming: network QoS, application QoS, and user QoS (i.e., QoE). Our ultimate goal is to understand how the network QoS affects the QoE of HTTP video streaming. Our approach is to first characterize the correlation between the application and network QoS using analytical models and empirical evaluation. The second step is to perform subjective experiments to evaluate the relationship between application QoS and QoE. Our analysis reveals that the frequency of rebuffering is the main factor responsible for the variations in the QoE.",
"title": ""
},
{
"docid": "435c6eb000618ef63a0f0f9f919bc0b4",
"text": "Selective sampling is an active variant of online learning in which the learner is allowed to adaptively query the label of an observed example. The goal of selective sampling is to achieve a good trade-off between prediction performance and the number of queried labels. Existing selective sampling algorithms are designed for vector-based data. In this paper, motivated by the ubiquity of graph representations in real-world applications, we propose to study selective sampling on graphs. We first present an online version of the well-known Learning with Local and Global Consistency method (OLLGC). It is essentially a second-order online learning algorithm, and can be seen as an online ridge regression in the Hilbert space of functions defined on graphs. We prove its regret bound in terms of the structural property (cut size) of a graph. Based on OLLGC, we present a selective sampling algorithm, namely Selective Sampling with Local and Global Consistency (SSLGC), which queries the label of each node based on the confidence of the linear function on graphs. Its bound on the label complexity is also derived. We analyze the low-rank approximation of graph kernels, which enables the online algorithms scale to large graphs. Experiments on benchmark graph datasets show that OLLGC outperforms the state-of-the-art first-order algorithm significantly, and SSLGC achieves comparable or even better results than OLLGC while querying substantially fewer nodes. Moreover, SSLGC is overwhelmingly better than random sampling.",
"title": ""
},
{
"docid": "ffb65e7e1964b9741109c335f37ff607",
"text": "To build a redundant medium-voltage converter, the semiconductors must be able to turn OFF different short circuits. The most challenging one is a hard turn OFF of a diode which is called short-circuit type IV. Without any protection measures this short circuit destroys the high-voltage diode. Therefore, a novel three-level converter with an increased short-circuit inductance is used. In this paper several short-circuit measurements on a 6.5 kV diode are presented which explain the effect of the protection measures. Moreover, the limits of the protection scheme are presented.",
"title": ""
},
{
"docid": "c1a44605e8e9b76a76bf5a2dd3539310",
"text": "This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data.",
"title": ""
},
{
"docid": "d1a9ac5a11d1f9fbd9b9ee24a199cb70",
"text": "In this paper, we proposed a new robust twin support vector machine (called R-TWSVM) via second order cone programming formulations for classification, which can deal with data with measurement noise efficiently. Preliminary experiments confirm the robustness of the proposed method and its superiority to the traditional robust SVM in both computation time and classification accuracy. Remarkably, since there are only inner products about inputs in our dual problems, this makes us apply kernel trick directly for nonlinear cases. Simultaneously we does not need to solve the extra inverse of matrices, which is totally different with existing TWSVMs. In addition, we also show that the TWSVMs are the special case of our robust model and simultaneously give a new dual form of TWSVM by degenerating R-TWSVM, which successfully overcomes the existing shortcomings of TWSVM. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eff4f126e50447f872109549d060fbc8",
"text": "Many combinatorial problems are NP-complete for general graphs. However, when restricted to series–parallel graphs or partial k-trees, many of these problems can be solved in polynomial time, mostly in linear time. On the other hand, very few problems are known to be NP-complete for series–parallel graphs or partial k-trees. These include the subgraph isomorphism problem and the bandwidth problem. However, these problems are NP-complete even for trees. In this paper, we show that the edge-disjoint paths problem is NP-complete for series–parallel graphs and for partial 2-trees although the problem is trivial for trees and can be solved for outerplanar graphs in polynomial time. ? 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "22d687204c9e8829d2ee6da4eeea104e",
"text": "In speech based emotion recognition, both acoustic features extraction and features classification are usually time consuming,which obstruct the system to be real time. In this paper, we proposea novel feature selection (FSalgorithm to filter out the low efficiency features towards fast speech emotion recognition.Firstly, each acoustic feature's discriminative ability, time consumption and redundancy are calculated. Then, we map the original feature space into a nonlinear one to select nonlinear features,which can exploit the underlying relationship among the original features. Thirdly, high discriminative nonlinear feature with low time consumption is initially preserved. Finally, a further selection is followed to obtain low redundant features based on these preserved features. The final selected nonlinear features are used in features' extraction and features' classification in our approach, we call them qualified features. The experimental results demonstrate that recognition time consumption can be dramatically reduced in not only the extraction phase but also the classification phase. Moreover, a competitive of recognition accuracy has been observed in the speech emotion recognition.",
"title": ""
},
{
"docid": "75bd4eca2d60dfbe7426914b178cd76a",
"text": "While precision and recall have served the information extraction community well as two separate measures of system performance, we show that the F -measure, the weighted harmonic mean of precision and recall, exhibits certain undesirable behaviors. To overcome these limitations, we define an error measure, the slot error rate, which combines the different types of error directly, without having to resort to precision and recall as preliminary measures. The slot error rate is analogous to the word error rate that is used for measuring speech recognition performance; it is intended to be a measure of the cost to the user for the system to make the different types of errors.",
"title": ""
},
{
"docid": "8d1797caf78004e6ba548ace7d5a1161",
"text": "An automated irrigation system was developed to optimize water use for agricultural crops. The system has a distributed wireless network of soil-moisture and temperature sensors placed in the root zone of the plants. In addition, a gateway unit handles sensor information, triggers actuators, and transmits data to a web application. An algorithm was developed with threshold values of temperature and soil moisture that was programmed into a microcontroller-based gateway to control water quantity. The system was powered by photovoltaic panels and had a duplex communication link based on a cellular-Internet interface that allowed for data inspection and irrigation scheduling to be programmed through a web page. The automated system was tested in a sage crop field for 136 days and water savings of up to 90% compared with traditional irrigation practices of the agricultural zone were achieved. Three replicas of the automated system have been used successfully in other places for 18 months. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "dc4aba1d336c602b896fbff3e614be39",
"text": "Requirements in computational power have grown dramatically in recent years. This is also the case in many language processing tasks, due to the overwhelming and ever increasing amount of textual information that must be processed in a reasonable time frame. This scenario has led to a paradigm shift in the computing architectures and large-scale data processing strategies used in the Natural Language Processing field. In this paper we present a new distributed architecture and technology for scaling up text analysis running a complete chain of linguistic processors on several virtual machines. Furthermore, we also describe a series of experiments carried out with the goal of analyzing the scaling capabilities of the language processing pipeline used in this setting. We explore the use of Storm in a new approach for scalable distributed language processing across multiple machines and evaluate its effectiveness and efficiency when processing documents on a medium and large scale. The experiments have shown that there is a big room for improvement regarding language processing performance when adopting parallel architectures, and that we might expect even better results with the use of large clusters with many processing",
"title": ""
},
{
"docid": "8da9e8193d4fead65bd38d62a22998a1",
"text": "Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. In order to minimize the cost of resources, it is also important to satisfy a minimum service level to customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that Saas providers are able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and handling heterogeneity of Virtual Machines. We take into account the customers' Quality of Service parameters such as response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment.",
"title": ""
},
{
"docid": "38bdf0690a8409808cc337475ccf8347",
"text": "Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from the previous and achieved network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learn from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose a LSTM RNN framework for predicting Traffic Matrix (TM) in large networks. By validating our framework on real-world data from GÉANT network, we show that our LSTM models converge quickly and give state of the art TM prediction performance for relatively small sized models. keywords Traffic Matrix, Prediction, Neural Networks, Long Short-Term Mermory",
"title": ""
},
{
"docid": "034943e26879bedd5c25079b986851e6",
"text": "3D Time-of-Flight sensing technology provides distant measurements from the camera to the scene in the field of view, for complete depth map of a scene. It works by illuminating the scene with a modulated light sources and measuring the phase change between illuminated and reflected light. This is translated to distance, for each pixel simultaneously. The sensor receives the radiance which is combination of light received along multiple paths due to global illumination. This global radiance causes multi-path interference. Separating these components to recover scene depths is challenging for corner shaped and coronel shaped scene as number of multiple path increases. It is observed that for different scenes, global radiance disappears with increase in frequencies beyond some threshold level. This observation is used to develop a novel technique to recover unambiguous depth map of a scene. It requires minimum two frequencies and 3 to 4 measurements which gives minimum computations.",
"title": ""
},
{
"docid": "101bcd956dcdb0fff3ecf78aa841314a",
"text": "HCI research has increasingly examined how sensing technologies can help people capture and visualize data about their health-related behaviors. Yet, few systems help people reflect more fundamentally on the factors that influence behaviors such as physical activity (PA). To address this research gap, we take a novel approach, examining how such reflections can be stimulated through a medium that generations of families have used for reflection and teaching: storytelling. Through observations and interviews, we studied how 13 families interacted with a low-fidelity prototype, and their attitudes towards this tool. Our prototype used storytelling and interactive prompts to scaffold reflection on factors that impact children's PA. We contribute to HCI research by characterizing how families interacted with a story-driven reflection tool, and how such a tool can encourage critical processes for behavior change. Informed by the Transtheoretical Model, we present design implications for reflective informatics systems.",
"title": ""
}
] |
scidocsrr
|
54d3a0a700dc4af903e37174a33acd05
|
Critical Success Factors for ERP Projects in Small and Medium-Sized Enterprises - The Perspective of Selected ERP System Vendors
|
[
{
"docid": "b1d85112f8a14e1ec28a6a64a03e7ec0",
"text": "This article reports the results of a survey of Chief Information Officers (CIOs) from Fortune 1000 companies on their perceptions of the critical success factors in Enterprise Resource Planning (ERP) implementation. Through a review of the literature, 11 critical success factors were identified , with underlying subfactors, for successful ERP implementation. The degree of criticality of each of these factors were assessed in a survey administered to the CIOs. The 5 most critical factors identified by the CIOs were top management support, project champion, ERP teamwork and composition, project management, and change management program and culture. The importance of each of these factors is discussed.",
"title": ""
}
] |
[
{
"docid": "e882efea987b4f248c0374c1555c668a",
"text": "This paper describes the Sonic Banana, a bend-sensor based alternative MIDI controller.",
"title": ""
},
{
"docid": "94dadbee2ca05ab17298dae45e8aebdc",
"text": "Cloud storage enables users to remotely store their data and enjoy the on-demand high quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service is also relinquishing users' physical possession of their outsourced data, which inevitably poses new security risks toward the correctness of the data in cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of misbehaving server. Considering the cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.",
"title": ""
},
{
"docid": "0e1dc67e473e6345be5725f2b06e916f",
"text": "A number of experiments explored the hypothesis that immediate memory span is not constant, but varies with the length of the words to be recalled. Results showed: (1) Memory span is inversely related to word length across a wide range of materials; (2) When number of syllables and number of phonemes are held constant, words of short temporal duration are better recalled than words of long duration; (3) Span could be predicted on the basis of the number of words which the subject can read in approximately 2 sec; (4) When articulation is suppressed by requiring the subject to articulate an irrelevant sound, the word length effect disappears with visual presentation, but remains when presentation is auditory. The results are interpreted in terms of a phonemically-based store of limited temporal capacity, which may function as an output buffer for speech production, and as a supplement to a more central working memory system.",
"title": ""
},
{
"docid": "201843d32d030d4c9bb388e4fbcd4f3c",
"text": "This paper reports on thermal-mechanical failures of through-silicon-vias (TSVs), in particular, for the first time, the protrusions at the TSV backside, which is exposed after wafer bonding, thinning and TSV revealing. Temperature dependence of TSV protrusion is investigated based on wide-range thermal shock and thermal cycling tests. While TSV protrusion on the TSV frontside is not visible after any of the tests, protrusions on the backside are found after both thermal shock tests and thermal cycling tests at temperatures above 250°C. The average TSV protrusion height increases from ~0.1 μm at 250°C to ~0.5 μm at 400°C and can be fitted to an exponential function with an activation energy of ~0.6eV, suggesting a Cu grain boundary diffusion mechanism.",
"title": ""
},
{
"docid": "52b481885dc7ad62dc4e8b3e31b9e71e",
"text": "In this paper, we propose a novel deep learning based video sa li ncy prediction method, named DeepVS. Specifically, we establ i h a large-scale eye-tracking database of videos (LEDOV), which includes 32 ubjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, w hich is composed of the objectness and motion subnets. In OM-CNN, cross-net m ask and hierarchical feature normalization are proposed to combine the sp atial features of the objectness subnet and the temporal features of the motion su b et. We further find from our database that there exists a temporal correlati on of human attention with a smooth saliency transition across video frames. We th us propose saliencystructured convolutional long short-term memory (SS-Conv LSTM) network, using the extracted features from OM-CNN as the input. Consequ ently, the interframe saliency maps of a video can be generated, which consid er both structured output with center-bias and cross-frame transitions of hum an attention maps. Finally, the experimental results show that DeepVS advances t he tate-of-the-art in video saliency prediction.",
"title": ""
},
{
"docid": "5e14a79e4634445291d67c3d7f4ea617",
"text": "A a new type of word-of-mouth information, online consumer product review is an emerging market phenomenon that is playing an increasingly important role in consumers’ purchase decisions. This paper argues that online consumer review, a type of product information created by users based on personal usage experience, can serve as a new element in the marketing communications mix and work as free “sales assistants” to help consumers identify the products that best match their idiosyncratic usage conditions. This paper develops a normative model to address several important strategic issues related to consumer reviews. First, we show when and how the seller should adjust its own marketing communication strategy in response to consumer reviews. Our results reveal that if the review information is sufficiently informative, the two types of product information, i.e., the seller-created product attribute information and buyer-created review information, will interact with each other. For example, when the product cost is low and/or there are sufficient expert (more sophisticated) product users, the two types of information are complements, and the seller’s best response is to increase the amount of product attribute information conveyed via its marketing communications after the reviews become available. However, when the product cost is high and there are sufficient novice (less sophisticated) product users, the two types of information are substitutes, and the seller’s best response is to reduce the amount of product attribute information it offers, even if it is cost-free to provide such information. We also derive precise conditions under which the seller can increase its profit by adopting a proactive strategy, i.e., adjusting its marketing strategies even before consumer reviews become available. Second, we identify product/market conditions under which the seller benefits from facilitating such buyer-created information (e.g., by allowing consumers to post user-based product reviews on the seller’s website). Finally, we illustrate the importance of the timing of the introduction of consumer reviews available as a strategic variable and show that delaying the availability of consumer reviews for a given product can be beneficial if the number of expert (more sophisticated) product users is relatively large and cost of the product is low.",
"title": ""
},
{
"docid": "66ce4b486893e17e031a96dca9022ade",
"text": "Product reviews possess critical information regarding customers’ concerns and their experience with the product. Such information is considered essential to firms’ business intelligence which can be utilized for the purpose of conceptual design, personalization, product recommendation, better customer understanding, and finally attract more loyal customers. Previous studies of deriving useful information from customer reviews focused mainly on numerical and categorical data. Textual data have been somewhat ignored although they are deemed valuable. Existing methods of opinion mining in processing customer reviews concentrates on counting positive and negative comments of review writers, which is not enough to cover all important topics and concerns across different review articles. Instead, we propose an automatic summarization approach based on the analysis of review articles’ internal topic structure to assemble customer concerns. Different from the existing summarization approaches centered on sentence ranking and clustering, our approach discovers and extracts salient topics from a set of online reviews and further ranks these topics. The final summary is then generated based on the ranked topics. The experimental study and evaluation show that the proposed approach outperforms the peer approaches, i.e. opinion mining and clustering-summarization, in terms of users’ responsiveness and its ability to discover the most important topics. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5ea3080d724439a0034fa2fe2569995b",
"text": "Robotic picking from cluttered bins is a demanding task, for which Amazon Robotics holds challenges. The 2017 Amazon Robotics Challenge (ARC) required stowing items into a storage system, picking specific items, and packing them into boxes. In this paper, we describe the entry of team NimbRo Picking. Our deep object perception pipeline can be quickly and efficiently adapted to new items using a custom turntable capture system and transfer learning. It produces high-quality item segments, on which grasp poses are found. A planning component coordinates manipulation actions between two robot arms, minimizing execution time. The system has been demonstrated successfully at ARC, where our team reached second places in both the picking task and the final stow-and-pick task. We also evaluate individual components.",
"title": ""
},
{
"docid": "512964f588b7afe09183dbaa3fe254d0",
"text": "This paper proposes an Internet of Things (IoT)-enabled multiagent system (MAS) for residential DC microgrids (RDCMG). The proposed MAS consisting of smart home agents (SHAs) aims to cooperate each other to alleviate the peak load of the RDCMG and to minimize the electricity costs for smart homes. These are achieved by agent utility functions and the best operating time algorithm (BOT) in the MAS. Moreover, IoT-based efficient and cost-effective agent communication method is proposed, which applies message queuing telemetry transport (MQTT) publish/subscribe protocol via MQTT brokers. The proposed IoT-enabled MAS and smart home models are implemented in five Raspberry pi 3 boards and validated by experimental studies for a RDCMG with five smart homes.",
"title": ""
},
{
"docid": "f838806a316b4267e166e7215db12166",
"text": "This paper presents a computationally efficient method for action recognition from depth video sequences. It employs the so called depth motion maps (DMMs) from three projection views (front, side and top) to capture motion cues and uses local binary patterns (LBPs) to gain a compact feature representation. Two types of fusion consisting of feature-level fusion and decision-level fusion are considered. In the feature-level fusion, LBP features from three DMMs are merged before classification while in the decision-level fusion, a soft decision-fusion rule is used to combine the classification outcomes. The introduced method is evaluated on two standard datasets and is also compared with the existing methods. The results indicate that it outperforms the existing methods and is able to process depth video sequences in real-time.",
"title": ""
},
{
"docid": "8d070d8506d8a83ce78bde0e19f28031",
"text": "Although amyotrophic lateral sclerosis and its variants are readily recognised by neurologists, about 10% of patients are misdiagnosed, and delays in diagnosis are common. Prompt diagnosis, sensitive communication of the diagnosis, the involvement of the patient and their family, and a positive care plan are prerequisites for good clinical management. A multidisciplinary, palliative approach can prolong survival and maintain quality of life. Treatment with riluzole improves survival but has a marginal effect on the rate of functional deterioration, whereas non-invasive ventilation prolongs survival and improves or maintains quality of life. In this Review, we discuss the diagnosis, management, and how to cope with impaired function and end of life on the basis of our experience, the opinions of experts, existing guidelines, and clinical trials. We highlight the need for research on the effectiveness of gastrostomy, access to non-invasive ventilation and palliative care, communication between the care team, the patient and his or her family, and recognition of the clinical and social effects of cognitive impairment. We recommend that the plethora of evidence-based guidelines should be compiled into an internationally agreed guideline of best practice.",
"title": ""
},
{
"docid": "e964d88be0270bc6ee7eb7748868dd3c",
"text": "The standard serial algorithm for strongly connected components is based on depth rst search, which is di cult to parallelize. We describe a divide-and-conquer algorithm for this problem which has signi cantly greater potential for parallelization. For a graph with n vertices in which degrees are bounded by a constant, we show the expected serial running time of our algorithm to be O(n log n).",
"title": ""
},
{
"docid": "f7bb972cc08d290661bd1f53c4f505f4",
"text": "BACKGROUND\nOpen-source clinical natural-language-processing (NLP) systems have lowered the barrier to the development of effective clinical document classification systems. Clinical natural-language-processing systems annotate the syntax and semantics of clinical text; however, feature extraction and representation for document classification pose technical challenges.\n\n\nMETHODS\nThe authors developed extensions to the clinical Text Analysis and Knowledge Extraction System (cTAKES) that simplify feature extraction, experimentation with various feature representations, and the development of both rule and machine-learning based document classifiers. The authors describe and evaluate their system, the Yale cTAKES Extensions (YTEX), on the classification of radiology reports that contain findings suggestive of hepatic decompensation.\n\n\nRESULTS AND DISCUSSION\nThe F(1)-Score of the system for the retrieval of abdominal radiology reports was 96%, and was 79%, 91%, and 95% for the presence of liver masses, ascites, and varices, respectively. The authors released YTEX as open source, available at http://code.google.com/p/ytex.",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "751644f811112a4ac7f1ead5f456056b",
"text": "Camera-based text processing has attracted considerable attention and numerous methods have been proposed. However, most of these methods have focused on the scene text detection problem and relatively little work has been performed on camera-captured document images. In this paper, we present a text-line detection algorithm for camera-captured document images, which is an essential step toward document understanding. In particular, our method is developed by incorporating state estimation (an extension of scale selection) into a connected component (CC)-based framework. To be precise, we extract CCs with the maximally stable extremal region algorithm and estimate the scales and orientations of CCs from their projection profiles. Since this state estimation facilitates a merging process (bottom-up clustering) and provides a stopping criterion, our method is able to handle arbitrarily oriented text-lines and works robustly for a range of scales. Finally, a text-line/non-text-line classifier is trained and non-text candidates (e.g., background clutters) are filtered out with the classifier. Experimental results show that the proposed method outperforms conventional methods on a standard dataset and works well for a new challenging dataset.",
"title": ""
},
{
"docid": "c6f52d8333406bce50d72779f07d5ac2",
"text": "Dimensionality reduction studies methods that effectively reduce data dimensionality for efficient data processing tasks such as pattern recognition, machine learning, text retrieval, and data mining. We introduce the field of dimensionality reduction by dividing it into two parts: feature extraction and feature selection. Feature extraction creates new features resulting from the combination of the original features; and feature selection produces a subset of the original features. Both attempt to reduce the dimensionality of a dataset in order to facilitate efficient data processing tasks. We introduce key concepts of feature extraction and feature selection, describe some basic methods, and illustrate their applications with some practical cases. Extensive research into dimensionality reduction is being carried out for the past many decades. Even today its demand is further increasing due to important high-dimensional applications such as gene expression data, text categorization, and document indexing.",
"title": ""
},
{
"docid": "7a58e55ea8f2cd7697e859a4da7c8844",
"text": "A sex difference on mental-rotation tasks has been demonstrated repeatedly, but not in children less than 4 years of age. To demonstrate mental rotation in human infants, we habituated 5-month-old infants to an object revolving through a 240 degrees angle. In successive test trials, infants saw the habituation object or its mirror image revolving through a previously unseen 120 degrees angle. Only the male infants appeared to recognize the familiar object from the new perspective, a feat requiring mental rotation. These data provide evidence for a sex difference in mental rotation of an object through three-dimensional space, consistently seen in adult populations.",
"title": ""
},
{
"docid": "d4c19a8e4e51ede55ce62a3bcc3df5ad",
"text": "The daily average PM2.5 concentration forecast is a leading component nowadays in air quality research, which is necessary to perform in order to assess the impact of air on the health and welfare of every living being. The present work is aimed at analyzing and benchmarking a neural-network approach to the prediction of average PM2.5 concentrations. The model thus obtained will be indispensable, as a control tool, for the purpose of preventing dangerous situations that may arise. To this end we have obtained data and measurements based on samples taken during the early hours of the day. Results from three different topologies of neural networks were compared so as to identify their potential uses, or rather, their strengths and weaknesses: Multilayer Perceptron (MLP), Radial Basis Function (RBF) and Square Multilayer Perceptron (SMLP). Moreover, two classical models were built (a persistence model and a linear regression), so as to compare their results with the ones provided by the neural network models. The results clearly demonstrated that the neural approach not only outperformed the classical models but also showed fairly similar values among different topologies. Moreover, a differential behavior in terms of stability and length of the training phase emerged during testing as well. The RBF shows up to be the network with the shortest training times, combined with a greater stability during the prediction stage, thus characterizing this topology as an ideal solution for its use in environmental applications instead of the widely used and less effective MLP. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "01a4b2be52e379db6ace7fa8ed501805",
"text": "The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.",
"title": ""
},
{
"docid": "b66f1ccd73a5bbeea79713ebb97f7112",
"text": "Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless essential quantitative task in Clinical Microbiology Laboratories. With this work we explore the possibility to find effective solutions to the above issue by designing and testing two different machine learning approaches. The first one is based on the extraction of a complete set of handcrafted morphometric and radiometric features used within a Support Vector Machines solution. The second one is based on the design and configuration of a Convolutional Neural Networks deep learning architecture. To validate, in a real and challenging clinical scenario, the proposed bacterial load estimation techniques, we built and publicly released a fully labeled large and representative database of both single and aggregated bacterial colonies extracted from routine clinical laboratory culture plates. Dataset enhancement approaches have also been experimentally tested for performance optimization. The adopted deep learning approach outperformed the handcrafted feature based one, and also a conventional reference technique, by a large margin, becoming a preferable solution for the addressed Digital Microbiology Imaging quantification task, especially in the emerging context of Full Laboratory Automation systems.",
"title": ""
}
] |
scidocsrr
|
3948f6f6aebf27bedd2a29837beb842f
|
Information security awareness and behavior : a theory-based literature review
|
[
{
"docid": "9e1cefe8c58774ea54b507a3702f825f",
"text": "Organizations and individuals are increasingly impacted by misuses of information that result from security lapses. Most of the cumulative research on information security has investigated the technical side of this critical issue, but securing organizational systems has its grounding in personal behavior. The fact remains that even with implementing mandatory controls, the application of computing defenses has not kept pace with abusers’ attempts to undermine them. Studies of information security contravention behaviors have focused on some aspects of security lapses and have provided some behavioral recommendations such as punishment of offenders or ethics training. While this research has provided some insight on information security contravention, they leave incomplete our understanding of the omission of information security measures among people who know how to protect their systems but fail to do so. Yet carelessness with information and failure to take available precautions contributes to significant civil losses and even to crimes. Explanatory theory to guide research that might help to answer important questions about how to treat this omission problem lacks empirical testing. This empirical study uses protection motivation theory to articulate and test a threat control model to validate assumptions and better understand the ‘‘knowing-doing” gap, so that more effective interventions can be developed. 2008 Elsevier Ltd. All rights reserved. d. All rights reserved. Workman), [email protected] (W.H. Bommer), [email protected] 2800 M. Workman et al. / Computers in Human Behavior 24 (2008) 2799–2816",
"title": ""
}
] |
[
{
"docid": "1349bf2e61f0d831e34093fb9d68b1a8",
"text": "Although diverse news stories are actively posted on social media, readers often focus on the news which reinforces their pre-existing views, leading to 'filter bubble' effects. To combat this, some recent systems expose and nudge readers toward stories with different points of view. One example is the Wall Street Journal's 'Blue Feed, Red Feed' system, which presents posts from biased publishers on each side of a topic. However, these systems have had limited success. We present a complementary approach which identifies high consensus 'purple' posts that generate similar reactions from both 'blue' and 'red' readers. We define and operationalize consensus for news posts on Twitter in the context of US politics. We show that high consensus posts can be identified and discuss their empirical properties. We present a method for automatically identifying high and low consensus news posts on Twitter, which can work at scale across many publishers. To do this, we propose a novel category of audience leaning based features, which we show are well suited to this task. Finally, we present our 'Purple Feed' system which highlights high consensus posts from publishers on both sides of the political spectrum.",
"title": ""
},
{
"docid": "9f3803ae394163e32fe81784b671de92",
"text": "A smart community is a distributed system consisting of a set of smart homes which utilize the smart home scheduling techniques to enable customers to automatically schedule their energy loads targeting various purposes such as electricity bill reduction. Smart home scheduling is usually implemented in a decentralized fashion inside a smart community, where customers compete for the community level renewable energy due to their relatively low prices. Typically there exists an aggregator as a community wide electricity policy maker aiming to minimize the total electricity bill among all customers. This paper develops a new renewable energy aware pricing scheme to achieve this target. We establish the proof that under certain assumptions the optimal solution of decentralized smart home scheduling is equivalent to that of the centralized technique, reaching the theoretical lower bound of the community wide total electricity bill. In addition, an advanced cross entropy optimization technique is proposed to compute the pricing scheme of renewable energy, which is then integrated in smart home scheduling. The simulation results demonstrate that our pricing scheme facilitates the reduction of both the community wide electricity bill and individual electricity bills compared to the uniform pricing. In particular, the community wide electricity bill can be reduced to only 0.06 percent above the theoretic lower bound.",
"title": ""
},
{
"docid": "4c8ff8cf19292475b724d7036ed8b75c",
"text": "The purpose of this study was to examine intratester reliability of a test designed to measure the standing pelvic-tilt angle, active posterior and anterior pelvic-tilt angles and ranges of motion, and the total pelvic-tilt range of motion (ROM). After an instruction session, the pelvic-tilt angles of the right side of 20 men were calculated using trigonometric functions. Ranges of motion were determined from the pelvic-tilt angles. Intratester reliability coefficients (Pearson r) for test and retest measurements were .88 for the standing pelvic-tilt angle, .88 for the posterior pelvic-tilt angle, .92 for the anterior pelvic-tilt angle, .62 for the posterior pelvic-tilt ROM, .92 for the anterior pelvic-tilt ROM, and .87 for the total ROM. We discuss the factors that may have influenced the reliability of the measurements and the clinical implications and limitations of the test. We suggest additional research to examine intratester reliability of measuring the posterior pelvic-tilt ROM, intertester reliability of measuring all angles and ROM, and the pelvic tilt of many types of subjects.",
"title": ""
},
{
"docid": "11e340ddb1a747eabebd4e5eeb097ce5",
"text": "Computational thinking, a form of thinking and problem solving within computer science, has become a popular focus of research on computer science education. In this paper, we attempt to investigate the role that computational thinking plays in the experience of introductory computer science students at a South African university. To this end, we have designed and administered a test for computational thinking ability, and contrasted the results of this test with the class marks for the students involved. The results of this test give us an initial view of the abilities that students possess when entering the computer science course. The results indicate that students who performed well in the assessment have a favourable pass rate for their class tests, and specific areas of weakness have been identified. Finally, we describe the plan for a follow-up test to take place at the end of the course to determine how students' abilities have changed over a semester of studies.",
"title": ""
},
{
"docid": "88a4ab49e7d3263d5d6470d123b6e74b",
"text": "Graph databases have gained renewed interest in the last years, due to its applications in areas such as the Semantic Web and Social Networks Analysis. We study the problem of querying graph databases, and, in particular, the expressiveness and complexity of evaluation for several general-purpose query languages, such as the regular path queries and its extensions with conjunctions and inverses. We distinguish between two semantics for these languages. The first one, based on simple paths, easily leads to intractability, while the second one, based on arbitrary paths, allows tractable evaluation for an expressive family of languages.\n We also study two recent extensions of these languages that have been motivated by modern applications of graph databases. The first one allows to treat paths as first-class citizens, while the second one permits to express queries that combine the topology of the graph with its underlying data.",
"title": ""
},
{
"docid": "1d1db0c5943e6141d0d62c20d706a51f",
"text": "The use of renewable energy source (RES) in meet the demand of electrical energy is getting into attention as solution of the problem a deficit of electrical energy. Application of RES in electricity generation system is done in a variety of configurations, among others in microgrid system. Implementation of microgrid systems provide many advantages both from the user and from the electric utility provider. Many microgrid development carried out in several countries, because microgrid offers many advantages, including better power quality and more environmentally friendly. Microgrid development concern in technology generation, microgrid architecture, power electronics, control systems, protection systems. This paper reviewing various technological developments related to microgrid system and case study about microgrid system development using grid tie inverter (GTI). Microgrid system can implemented using GTI, power transfer can occur from GTI to grid when GTI has power excess and grid supplying power to GTI when GTI power shortage.",
"title": ""
},
{
"docid": "729581c92155092a82886e58284e8b92",
"text": "We investigate here the capabilities of a 400-element reconfigurable transmitarray antenna to synthesize monopulse radiation patterns for radar applications in X-band. The generation of the sum (Σ) and difference (A) patterns are demonstrated both theoretically and experimentally for broadside as well as tilted beams in different azimuthal planes. Two different feed configurations have been considered, namely, a single focal source and a four-element focal source configuration. The latter enables the simultaneous generation of a Σ- and two A-patterns in orthogonal planes, which is an important advantage for tracking applications with stringent requirements in speed and accuracy.",
"title": ""
},
{
"docid": "6ae2f2fa9a58fd101f6f43276ce2ff04",
"text": "In the past decade, we have witnessed an unparalleled success of information and communication technologies (ICT), which is expected to be even more proliferating and ubiquitous in the future. Among many ICT applications, ICT components embedded into various devices and systems have become a critical one. In fact, embedded systems with communication capability span virtually every aspect of our daily life. An embedded system is defined as a computer system designed to perform dedicated specific functions, usually under real-time computing constraints. It is called ‘‘embedded’’ because it is embedded as a part of a complete device or system. By contrast, a general-purpose computer is designed to satisfy a wide range of user requirements. Embedded systems range from portable devices such as smart phones and MP3 players, to large installations like plant control systems. Recently, the convergence of cyber and physical spaces [1] has further transformed traditional embedded systems into cyberphysical systems (CPS), which are characterized by tight integration and coordination between computation and physical processes by means of networking. In CPS, various embedded devices with computational components are networked to monitor, sense, and actuate physical elements in the real world. Examples of CPS encompass a wide range of large-scale engineered systems such as avionics, healthcare, transportation, automation, and smart grid systems. In addition, the recent proliferation of smart phones and mobile Internet devices equipped with multiple sensors can be leveraged to enable mobile cyber-physical applications. In all of these systems, it is of critical importance to properly resolve the complex interactions between various computational and physical elements. In this guest editorial, we first provide an overview of CPS by introducing major issues in CPS as well as recent research efforts and future opportunities for CPS. Then, we summarize the papers in the special section by clearly describing their main contributions on CPS research. The remainder of the editorial is organized as follows: In Section 2, we provide an overview of CPS. We first explain the key characteristics of CPS compared to the traditional embedded systems. Then, we introduce the recent trend in CPS research with an emphasis on major research topics in CPS. We introduce recent CPS-related projects in Section 3. Summary of the papers in the special section follows in Section 4 by focusing on their contributions on CPS research. Finally, our conclusion follows in Section 5.",
"title": ""
},
{
"docid": "06a76b28605da6e3617c420926fd827e",
"text": "Abstractive summarization is the ultimate goal of document summarization research, but previously it is less investigated due to the immaturity of text generation techniques. Recently impressive progress has been made to abstractive sentence summarization using neural models. Unfortunately, attempts on abstractive document summarization are still in a primitive stage, and the evaluation results are worse than extractive methods on benchmark datasets. In this paper, we review the difficulties of neural abstractive document summarization, and propose a novel graph-based attention mechanism in the sequence-to-sequence framework. The intuition is to address the saliency factor of summarization, which has been overlooked by prior works. Experimental results demonstrate our model is able to achieve considerable improvement over previous neural abstractive models. The data-driven neural abstractive method is also competitive with state-of-the-art extractive methods.ive summarization is the ultimate goal of document summarization research, but previously it is less investigated due to the immaturity of text generation techniques. Recently impressive progress has been made to abstractive sentence summarization using neural models. Unfortunately, attempts on abstractive document summarization are still in a primitive stage, and the evaluation results are worse than extractive methods on benchmark datasets. In this paper, we review the difficulties of neural abstractive document summarization, and propose a novel graph-based attention mechanism in the sequence-to-sequence framework. The intuition is to address the saliency factor of summarization, which has been overlooked by prior works. Experimental results demonstrate our model is able to achieve considerable improvement over previous neural abstractive models. The data-driven neural abstractive method is also competitive with state-of-the-art extractive methods.",
"title": ""
},
{
"docid": "5225d2972d53770cf1c677ba4d161f38",
"text": "The importance of minimizing flow completion times (FCT) in datacenters has led to a growing literature on new network transport designs. Of particular note is pFabric, a protocol that achieves near-optimal FCTs. However, pFabric's performance comes at the cost of generality, since pFabric requires specialized hardware that embeds a specific scheduling policy within the network fabric, making it hard to meet diverse policy goals. Aiming for generality, the recent Fastpass proposal returns to a design based on commodity network hardware and instead relies on a centralized scheduler. Fastpass achieves generality, but (as we show) loses many of pFabric's performance benefits.\n We present pHost, a new transport design aimed at achieving both: the near-optimal performance of pFabric and the commodity network design of Fastpass. Similar to Fastpass, pHost keeps the network simple by decoupling the network fabric from scheduling decisions. However, pHost introduces a new distributed protocol that allows end-hosts to directly make scheduling decisions, thus avoiding the overheads of Fastpass's centralized scheduler architecture. We show that pHost achieves performance on par with pFabric (within 4% for typical conditions) and significantly outperforms Fastpass (by a factor of 3.8×) while relying only on commodity network hardware.",
"title": ""
},
{
"docid": "64e2b73e8a2d12a1f0bbd7d07fccba72",
"text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.",
"title": ""
},
{
"docid": "c1a96dbed9373dddd0a7a07770395a7e",
"text": "Mobile devices are increasingly the dominant Internet access technology. Nevertheless, high costs, data caps, and throttling are a source of widespread frustration, and a significant barrier to adoption in emerging markets. This paper presents Flywheel, an HTTP proxy service that extends the life of mobile data plans by compressing responses in-flight between origin servers and client browsers. Flywheel is integrated with the Chrome web browser and reduces the size of proxied web pages by 50% for a median user. We report measurement results from millions of users as well as experience gained during three years of operating and evolving the production",
"title": ""
},
{
"docid": "4d1f7ca631304e03b720c501d7e9a227",
"text": "Due to the open and distributed characteristics of web service, its access control becomes a challenging problem which has not been addressed properly. In this paper, we show how semantic web technologies can be used to build a flexible access control system for web service. We follow the Role-based Access Control model and extend it with credential attributes. The access control model is represented by a semantic ontology, and specific semantic rules are constructed to implement such as dynamic roles assignment, separation of duty constraints and roles hierarchy reasoning, etc. These semantic rules can be verified and executed automatically by the reasoning engine, which can simplify the definition and enhance the interoperability of the access control policies. The basic access control architecture based on the semantic proposal for web service is presented. Finally, a prototype of the system is implemented to validate the proposal.",
"title": ""
},
{
"docid": "9507febd41296b63e8a6434eb27400f9",
"text": "This paper presents a new approach for automatic concept extraction, using grammatical parsers and Latent Semantic Analysis. The methodology is described, also the tool used to build the benchmarkingcorpus. The results obtained on student essays shows good inter-rater agreement and promising machine extraction performance. Concept extraction is the first step to automatically extract concept maps fromstudent’s essays or Concept Map Mining.",
"title": ""
},
{
"docid": "5aa10413b995b6b86100585f3245e4d9",
"text": "In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements-axonal arbor, synapse, dendritic tree, and soma-with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time-for the first time-using 16 Neurocores integrated on a board that consumes three watts.",
"title": ""
},
{
"docid": "364e6c4e1405b287a0d377bb943d1e6a",
"text": "The Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining precipitation estimates from multiple satellites, as well as gauge analyses where feasible, at fine scales (0.25° 0.25° and 3 hourly). TMPA is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at the present. The dataset covers the latitude band 50°N–S for the period from 1998 to the delayed present. Early validation results are as follows: the TMPA provides reasonable performance at monthly scales, although it is shown to have precipitation rate–dependent low bias due to lack of sensitivity to low precipitation rates over ocean in one of the input products [based on Advanced Microwave Sounding Unit-B (AMSU-B)]. At finer scales the TMPA is successful at approximately reproducing the surface observation–based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other finescale estimators. Examples are provided of a flood event and diurnal cycle determination.",
"title": ""
},
{
"docid": "2fbaa089463440b291caa4da5c776a54",
"text": "In the literature, two series of models have been proposed to address prediction problems including classification and regression. Simple models, such as generalized linear models, have ordinary performance but strong interpretability on a set of simple features. The other series, including tree-based models, organize numerical, categorical, and high dimensional features into a comprehensive structure with rich interpretable information in the data. In this paper, we propose a novel Discriminative Pattern-based Prediction framework (<inline-formula> <tex-math notation=\"LaTeX\">$\\sf {DPPred}$</tex-math><alternatives><inline-graphic xlink:href=\"shang-ieq1-2757476.gif\"/> </alternatives></inline-formula>) to accomplish the prediction tasks by taking their advantages of both effectiveness and interpretability. Specifically, <inline-formula><tex-math notation=\"LaTeX\">$\\sf {DPPred}$</tex-math><alternatives> <inline-graphic xlink:href=\"shang-ieq2-2757476.gif\"/></alternatives></inline-formula> adopts the concise discriminative patterns that are on the prefix paths from the root to leaf nodes in the tree-based models. <inline-formula> <tex-math notation=\"LaTeX\">$\\sf {DPPred}$</tex-math><alternatives><inline-graphic xlink:href=\"shang-ieq3-2757476.gif\"/> </alternatives></inline-formula> selects a limited number of the useful discriminative patterns by searching for the most effective pattern combination to fit generalized linear models. Extensive experiments show that in many scenarios, <inline-formula><tex-math notation=\"LaTeX\">$\\sf {DPPred}$</tex-math><alternatives> <inline-graphic xlink:href=\"shang-ieq4-2757476.gif\"/></alternatives></inline-formula> provides competitive accuracy with the state-of-the-art as well as the valuable interpretability for developers and experts. In particular, taking a clinical application dataset as a case study, our <inline-formula><tex-math notation=\"LaTeX\">$\\sf {DPPred}$</tex-math> <alternatives><inline-graphic xlink:href=\"shang-ieq5-2757476.gif\"/></alternatives></inline-formula> outperforms the baselines by using only 40 concise discriminative patterns out of a potentially exponentially large set of patterns.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
},
{
"docid": "93d40aa40a32edab611b6e8c4a652dbb",
"text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.",
"title": ""
}
] |
scidocsrr
|
ab93fb19a9288e6dfac396827b40a0d1
|
Anonymized data: generation, models, usage
|
[
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
}
] |
[
{
"docid": "8cd73397c9a79646ac1b2acac44dd8a7",
"text": "Liquid micro-jet array impingement cooling of a power conversion module with 12 power switching devices (six insulated gate bipolar transistors and six diodes) is investigated. The 1200-V/150-A module converts dc input power to variable frequency, variable voltage three-phase ac output to drive a 50HP three-phase induction motor. The silicon devices are attached to a packaging layer [direct bonded copper (DBC)], which in turn is soldered to a metal base plate. DI water micro-jet array impinges on the base plate of the module targeted at the footprint area of the devices. Although the high heat flux cooling capability of liquid impingement is a well-established finding, the impact of its practical implementation in power systems has never been addressed. This paper presents the first one-to-one comparison of liquid micro-jet array impingement cooling (JAIC) with the traditional methods, such as air-cooling over finned heat sink or liquid flow in multi-pass cold plate. Results show that compared to the conventional cooling methods, JAIC can significantly enhance the module output power. If the output power is maintained constant, the device temperature can be reduced drastically by JAIC. Furthermore, jet impingement provides uniform cooling for multiple devices placed over a large area, thereby reducing non-uniformity of temperature among the devices. The reduction in device temperature, both its absolute value and the non-uniformity, implies multi-fold increase in module reliability. The results thus illustrate the importance of efficient thermal management technique for compact and reliable power conversion application",
"title": ""
},
{
"docid": "3a9cb9113f114ee69c7ddf252c0ccd30",
"text": "OBJECTIVE\nRecent research on access to food among low-income populations in industrialised countries has begun to focus on neighbourhood food availability as a key determinant of dietary behaviour. This study examined the relationship between various measures of food store access and household fruit and vegetable use among participants in the Food Stamp Program, America's largest domestic food assistance programme.\n\n\nDESIGN\nA secondary data analysis was conducted using the 1996-97 National Food Stamp Program Survey. The survey employed a 1-week food inventory method, including two at-home interviews, to determine household food use. Separate linear regression models were developed to analyse fruit and vegetable use. Independent variables included distance to store, travel time to store, ownership of a car and difficulty of supermarket access. All models controlled for a full set of socio-economic variables.\n\n\nSUBJECTS\nA nationally representative sample of participants (n=963) in the Food Stamp Program.\n\n\nRESULTS\nAfter controlling for confounding variables, easy access to supermarket shopping was associated with increased household use of fruits (84 grams per adult equivalent per day; 95% confidence interval 5, 162). Distance from home to food store was inversely associated with fruit use by households. Similar patterns were seen with vegetable use, though associations were not significant.\n\n\nCONCLUSIONS\nEnvironmental factors are importantly related to dietary choice in a nationally representative sample of low-income households, reinforcing the importance of including such factors in interventions that seek to effect dietary improvements.",
"title": ""
},
{
"docid": "c8ba40dd66f57f6d192a73be94440d07",
"text": "PURPOSE\nWound infection after an ileostomy reversal is a common problem. To reduce wound-related complications, purse-string skin closure was introduced as an alternative to conventional linear skin closure. This study is designed to compare wound infection rates and operative outcomes between linear and purse-string skin closure after a loop ileostomy reversal.\n\n\nMETHODS\nBetween December 2002 and October 2010, a total of 48 consecutive patients undergoing a loop ileostomy reversal were enrolled. Outcomes were compared between linear skin closure (group L, n = 30) and purse string closure (group P, n = 18). The operative technique for linear skin closure consisted of an elliptical incision around the stoma, with mobilization, and anastomosis of the ileum. The rectus fascia was repaired with interrupted sutures. Skin closure was performed with vertical mattress interrupted sutures. Purse-string skin closure consisted of a circumstomal incision around the ileostomy using the same procedures as used for the ileum. Fascial closure was identical to linear closure, but the circumstomal skin incision was approximated using a purse-string subcuticular suture (2-0 Polysorb).\n\n\nRESULTS\nBetween group L and P, there were no differences of age, gender, body mass index, and American Society of Anesthesiologists (ASA) scores. Original indication for ileostomy was 23 cases of malignancy (76.7%) in group L, and 13 cases of malignancy (77.2%) in group P. The median time duration from ileostomy to reversal was 4.0 months (range, 0.6 to 55.7 months) in group L and 4.1 months (range, 2.2 to 43.9 months) in group P. The median operative time was 103 minutes (range, 45 to 260 minutes) in group L and 100 minutes (range, 30 to 185 minutes) in group P. The median hospital stay was 11 days (range, 5 to 4 days) in group L and 7 days (range, 4 to 14 days) in group P (P < 0.001). Wound infection was found in 5 cases (16.7%) in group L and in one case (5.6%) in group L (P = 0.26).\n\n\nCONCLUSION\nBased on this study, purse-string skin closure after a loop ileostomy reversal showed comparable outcomes, in terms of wound infection rates, to those of linear skin closure. Thus, purse-string skin closure could be a good alternative to the conventional linear closure.",
"title": ""
},
{
"docid": "ef742ded3107fe9c5812a7c866835117",
"text": "Much commentary has been circulating in academe regarding the research skills, or lack thereof, in members of ‘‘Generation Y,’’ the generation born between 1980 and 1994. The students currently on college campuses, as well as those due to arrive in the next few years, have grown up in front of electronic screens: television, movies, video games, computer monitors. It has been said that student critical thinking and other cognitive skills (as well as their physical well-being) are suffering because of the large proportion of time spent in sedentary pastimes, passively absorbing words and images, rather than in reading. It may be that students’ cognitive skills are not fully developing due to ubiquitous electronic information technologies. However, it may also be that academe, and indeed the entire world, is currently in the middle of a massive and wideranging shift in the way knowledge is disseminated and learned.",
"title": ""
},
{
"docid": "6beab636e3a9f8163d2f0a6271102d9a",
"text": "The development of high-dimensional generative models has recently gained a great surge of interest with the introduction of variational auto-encoders and generative adversarial neural networks. Different variants have been proposed where the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on a particular problem where one aims at generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. Therefore, we propose a generative model and a conditional variant built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a high variety of views. Unlike many multiview approaches, our model doesn’t need any supervision on the views but only on the content. Compared to other conditional generation approaches that are mostly based on binary or categorical attributes, we make no such assumption about the factors of variations. Our model can be used on problems with a huge, potentially infinite, number of categories. We experiment it on four image datasets on which we demonstrate the effectiveness of the model and its ability to generalize.",
"title": ""
},
{
"docid": "09c85a470bc9a38ed6af9878b9d3a754",
"text": "Prospective memory performance can be enhanced by task importance, for example by promising a reward. Typically, this comes at costs in the ongoing task. However, previous research has suggested that social importance (e.g., providing a social motive) can enhance prospective memory performance without additional monitoring costs in activity-based and time-based tasks. The aim of the present study was to investigate the influence of social importance in an event-based task. We compared four conditions: social importance, promising a reward, both social importance and promising a reward, and standard prospective memory instructions (control condition). The results showed enhanced prospective memory performance for all importance conditions compared to the control condition. Although ongoing task performance was slowed in all conditions with a prospective memory task when compared to a baseline condition with no prospective memory task, additional costs occurred only when both the social importance and reward were present simultaneously. Alone, neither social importance nor promising a reward produced an additional slowing when compared to the cost in the standard (control) condition. Thus, social importance and reward can enhance event-based prospective memory at no additional cost.",
"title": ""
},
{
"docid": "f07d44c814bdb87ffffc42ace8fd53a4",
"text": "We describe a batch method that uses a sizeable fraction of the training set at each iteration, and that employs secondorder information. • To improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. • This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS. • We show how to perform stable quasi-Newton updating in the multi-batch setting, illustrate the behavior of the algorithm in a distributed computing platform, and study its convergence properties for both the convex and nonconvex cases. Introduction min w∈Rd F (w) = 1 n n ∑ i=1 f (w;x, y) Idea: select a sizeable sample Sk ⊂ {1, . . . , n} at every iteration and perform quasi-Newton steps 1. Distributed computing setting: distributed gradient computation (with faults) 2. Multi-Batch setting: samples are changed at every iteration to accelerate learning Goal: show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost, or special synchronization Issue: samples used at the beginning and at the end of every iteration are different • potentially harmful for quasi-Newton methods Key: controlled sampling • consecutive samples overlap Sk ∩ Sk+1 = Ok 6= ∅ • gradient differences based on this overlap – stable quasi-Newton updates Multi-Batch L-BFGS Method At the k-th iteration: • sample Sk ⊂ {1, . . . , n} chosen, and iterates updated via wk+1 = wk − αkHkgk k where gk k is the batch gradient g Sk k = 1 |Sk| ∑ i∈Sk ∇f ( wk;x , y ) and Hk is the inverse BFGS Hessian approximation Hk+1 =V T k HkVk + ρksks T k ρk = 1 yT k sk , Vk = 1− ρkyksk • to ensure consistent curvature pair updates sk+1 = wk+1 − wk, yk+1 = gk k+1 − g Ok k where gk k+1 and g Ok k are gradients based on the overlapping samples only Ok = Sk ∩ Sk+1 Sample selection:",
"title": ""
},
{
"docid": "a220595aea41424065d4c60d60768ffa",
"text": "Characteristics of high-voltage dual-metal-trench (DMT) SiC Schottky pinch-rectifiers are reported for the first time. At a reverse bias of 300 V, the reverse leakage current of the SiC DMT device is 75 times less than that of a planar device while the forward bias characteristics remain comparable to those of a planar device. In this work, 4H-SiC pinch-rectifiers have been fabricated using a small/large barrier height (Ti/Ni) DMT device structure. The DMT structure is specially designed to permit simple fabrication in SiC. The Ti Schottky contact metal serves as a self-aligned trench etch mask and only four basic fabrication steps are required.",
"title": ""
},
{
"docid": "50e7e02f9a4b8b65cf2bce212314e77c",
"text": "Over the past few years, massive amounts of world knowledge have been accumulated in publicly available knowledge bases, such as Freebase, NELL, and YAGO. Yet despite their seemingly huge size, these knowledge bases are greatly incomplete. For example, over 70% of people included in Freebase have no known place of birth, and 99% have no known ethnicity. In this paper, we propose a way to leverage existing Web-search-based question-answering technology to fill in the gaps in knowledge bases in a targeted way. In particular, for each entity attribute, we learn the best set of queries to ask, such that the answer snippets returned by the search engine are most likely to contain the correct value for that attribute. For example, if we want to find Frank Zappa's mother, we could ask the query `who is the mother of Frank Zappa'. However, this is likely to return `The Mothers of Invention', which was the name of his band. Our system learns that it should (in this case) add disambiguating terms, such as Zappa's place of birth, in order to make it more likely that the search results contain snippets mentioning his mother. Our system also learns how many different queries to ask for each attribute, since in some cases, asking too many can hurt accuracy (by introducing false positives). We discuss how to aggregate candidate answers across multiple queries, ultimately returning probabilistic predictions for possible values for each attribute. Finally, we evaluate our system and show that it is able to extract a large number of facts with high confidence.",
"title": ""
},
{
"docid": "fe903498e0c3345d7e5ebc8bf3407c2f",
"text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.",
"title": ""
},
{
"docid": "3bcd7aaa3a3c8d19ba4d6edb6554dd85",
"text": "In order to achieve up to 1 Gb/s peak data rate in future IMT-Advanced mobile systems, carrier aggregation technology is introduced by the 3GPP to support very-high-data-rate transmissions over wide frequency bandwidths (e.g., up to 100 MHz) in its new LTE-Advanced standards. This article first gives a brief review of continuous and non-continuous CA techniques, followed by two data aggregation schemes in physical and medium access control layers. Some technical challenges for implementing CA technique in LTE-Advanced systems, with the requirements of backward compatibility to LTE systems, are highlighted and discussed. Possible technical solutions for the asymmetric CA problem, control signaling design, handover control, and guard band setting are reviewed. Simulation results show Doppler frequency shift has only limited impact on data transmission performance over wide frequency bands in a high-speed mobile environment when the component carriers are time synchronized. The frequency aliasing will generate much more interference between adjacent component carriers and therefore greatly degrades the bit error rate performance of downlink data transmissions.",
"title": ""
},
{
"docid": "3b90d2c7858680a9d90c49e63d39c7c6",
"text": "With multiple crowd gatherings of millions of people every year in events ranging from pilgrimages to protests, concerts to marathons, and festivals to funerals; visual crowd analysis is emerging as a new frontier in computer vision. In particular, counting in highly dense crowds is a challenging problem with far-reaching applicability in crowd safety and management, as well as gauging political significance of protests and demonstrations. In this paper, we propose a novel approach that simultaneously solves the problems of counting, density map estimation and localization of people in a given dense crowd image. Our formulation is based on an important observation that the three problems are inherently related to each other making the loss function for optimizing a deep CNN decomposable. Since localization requires high-quality images and annotations, we introduce UCF-QNRF dataset that overcomes the shortcomings of previous datasets, and contains 1.25 million humans manually marked with dot annotations. Finally, we present evaluation measures and comparison with recent deep CNN networks, including those developed specifically for crowd counting. Our approach significantly outperforms state-of-the-art on the new dataset, which is the most challenging dataset with the largest number of crowd annotations in the most diverse set of scenes.",
"title": ""
},
{
"docid": "03daed6effd2abd60be64c7cab280688",
"text": "Prepaid Energy Meter has been implemented in several countries. In fact, the disadvantage of the system is the behavior control of the users. Recharging should be carried out on the meter. The problem arises when the consumers leave and the electric pulses discharged. Therefore, a system is needed to control the electrical pulse wherever they are. Prepaid energy meter is proposed and simulated by using PROTEUS software. The system was designed by using ATmega128 as a microcontroller. This paper evaluates the accuracy of voltage and current measurement that produced by the model. The simulations show that our proposed prepaid energy meter produces minimum error compared to the actual volt and actual ampere meter.",
"title": ""
},
{
"docid": "216b169897d93939e64b552e4422aa69",
"text": "The ideal treatment of the nasolabial fold, the tear trough, the labiomandibular fold and the mentolabial sulcus is still discussed controversially. The detailed topographical anatomy of the fat compartments may clarify the anatomy of facial folds and may offer valuable information for choosing the adequate treatment modality. Nine non-fixed cadaver heads in the age range between 72 and 89 years (five female and four male) were investigated. Computed tomographic scans were performed after injection of a radiographic contrast medium directly into the fat compartments surrounding prominent facial folds. The data were analysed after multiplanar image reconstruction. The fat compartments surrounding the facial folds could be defined in each subject. Different arrangement patterns of the fat compartments around the facial rhytides were found. The nasolabial fold, the tear trough and the labiomandibular fold represent an anatomical border between adjacent fat compartments. By contrast, the glabellar fold and the labiomental sulcus have no direct relation to the boundaries of facial fat. Deep fat, underlying a facial rhytide, was identified underneath the nasolabial crease and the labiomental sulcus. In conclusion, an improvement by a compartment-specific volume augmentation of the nasolabial fold, the tear trough and the labiomandibular fold is limited by existing boundaries that extend into the skin. In the area of the nasolabial fold and the mentolabial sulcus, deep fat exists which can be used for augmentation and subsequent elevation of the folds. The treatment of the tear trough deformity appears anatomically the most challenging area since the superficial and deep fat compartments are separated by an osseo-cutaneous barrier, the orbicularis retaining ligament. In severe cases, a surgical treatment should be considered. By contrast, the glabellar fold shows the most simple anatomical architecture. The fold lies above one subcutaneous fat compartment that can be used for augmentation.",
"title": ""
},
{
"docid": "637344d11dc3dad19691e5f4973c2107",
"text": "This article presents a compact multi-layer handset phone 13.56 MHz NFC antenna design by novel LITA (Laser-Induced Integrated Thin-Film Antenna) technologies, which is jointly developed by ITRI and ACON in Taiwan. Through the proposed LITA technology, metal layouts of antennas can be formed on the internal surface of a smartwatch casing successfully with conformal, thin-film type, multi-layer and highly integrating characteristics. It is demonstrated that by designing a NFC antenna to become a two-layer overlapped coil structure by LITA technology, the required antenna layout area can be reduced about 40% compared to the co-planar NFC coil designs and also keep good inductive distance. The constructed antenna prototype integrated with a NFC reader chip (NXP NPC300) is analyzed and the read distance between the proposed antenna and type1 to type4 NFC tags classified by the NFC Forum are tasted and discussed in this paper.",
"title": ""
},
{
"docid": "bda2c57a02275e0533f83da1ad46b573",
"text": "In this thesis, we propose a new, scalable probabilistic logic called ProPPR to combine the best of the symbolic and statistical worlds. ProPPR has the rich semantic representation of Prolog, but we associate a feature vector to each clause, such that each clause has a weight vector that can be learned from the training data. Instead of searching over the entire graph for solutions, ProPPR uses a provably-correct approximate personalized PageRank to construct a subgraph for local grounding: the inference time is now independent of the size of the KB. We show that ProPPR can be viewed as a recursive extension to the path ranking algorithm (PRA), and outperforms PRA in the inference task with one million facts from NELL.",
"title": ""
},
{
"docid": "e53a8e3e7664f66cce0593ea6f8a2443",
"text": "In real world social networks, there are multiple cascades which are rarely independent. They usually compete or cooperate with each other. Motivated by the reinforcement theory in sociology we leverage the fact that adoption of a user to any behavior is modeled by the aggregation of behaviors of its neighbors. We use a multidimensional marked Hawkes process to model users product adoption and consequently spread of cascades in social networks. The resulting inference problem is proved to be convex and is solved in parallel by using the barrier method. The advantage of the proposed model is twofold; it models correlated cascades and also learns the latent diffusion network. Experimental results on synthetic and two real datasets gathered from Twitter, URL shortening and music streaming services, illustrate the superior performance of the proposed model over the alternatives. Introduction Social networks and virtual communities play a key role in today’s life. People share their thoughts, beliefs, opinions, news, and even their locations in social networks and engage in social interactions by commenting, liking, mentioning and following each other. This virtual world is an ideal place for studying social behaviors and spread of cultural norms (Vespignani 2012), contagion of diseases (Barabasi 2015), advertising and marketing (Valera and Rodriguez 2015) and estimating the culprit in malicious diffusions (Farajtabar et al. 2015a). Among them, the study of information diffusion or more generally dynamics on the network is of crucial importance and can be used in many applications. The trace of information diffusion, virus or infection spread, rumor propagation, and product adoption is usually called cascades. In conventional studies of diffusion networks, individual cascades are mostly considered in isolation, i.e., independent of each other (Rodriguez et al. 2015). However in realistic situations, they are rarely independent and can be competitive, when a URL shortening service become popular the others receive less attention; or cooperative, when usage of Google Play Music correlates with that of Youtube due to, for example, simultaneous arrival of new albums (Fig. 1). Modeling multiple cascades which are correlated to each other is a challenging problem. Considerable work have Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 0 200 400 600 Time (hr) 0 50",
"title": ""
},
{
"docid": "2d774ec62cdac08997cb8b86e73fe015",
"text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.",
"title": ""
},
{
"docid": "2283e43c2bad5ac682fe185cb2b8a9c1",
"text": "As widely recognized in the literature, information technology (IT) investments have several special characteristics that make assessing their costs and benefits complicated. Here, we address the problem of evaluating a web content management system for both internal and external use. The investment is presently undergoing an evaluation process in a multinational company. We aim at making explicit the desired benefits and expected risks of the system investment. An evaluation hierarchy at general level is constructed. After this, a more detailed hierarchy is constructed to take into account the contextual issues. To catch the contextual issues key company representatives were interviewed. The investment alternatives are compared applying the principles of the Analytic Hierarchy Process (AHP). Due to the subjective and uncertain characteristics of the strategic IT investments a wide range of sensitivity analyses is performed.",
"title": ""
},
{
"docid": "0f9b073461047d698b6bba8d9ee7bff2",
"text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.",
"title": ""
}
] |
scidocsrr
|
a51e3155a8fb6d3093bc43a57f7c6dcf
|
Analyzing and Detecting Opinion Spam on a Large-scale Dataset via Temporal and Spatial Patterns
|
[
{
"docid": "381ce2a247bfef93c67a3c3937a29b5a",
"text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.",
"title": ""
},
{
"docid": "646097feed29f603724f7ec6b8bbeb8b",
"text": "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.",
"title": ""
},
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsuperProceedings of the Seventh International AAAI Conference on Weblogs and Social Media",
"title": ""
}
] |
[
{
"docid": "3bd639feecf4194c73c3e20ae4ef8203",
"text": "We present an optimized implementation of the Fan-Vercauteren variant of Brakerski’s scale-invariant homomorphic encryption scheme. Our algorithmic improvements focus on optimizing decryption and homomorphic multiplication in the Residue Number System (RNS), using the Chinese Remainder Theorem (CRT) to represent and manipulate the large coefficients in the ciphertext polynomials. In particular, we propose efficient procedures for scaling and CRT basis extension that do not require translating the numbers to standard (positional) representation. Compared to the previously proposed RNS design due to Bajard et al. [3], our procedures are simpler and faster, and introduce a lower amount of noise. We implement our optimizations in the PALISADE library and evaluate the runtime performance for the range of multiplicative depths from 1 to 100. For example, homomorphic multiplication for a depth-20 setting can be executed in 62 ms on a modern server system, which is already practical for some outsourced-computing applications. Our algorithmic improvements can also be applied to other scale-invariant homomorphic encryption schemes, such as YASHE.",
"title": ""
},
{
"docid": "e2a54639dfa4d1a828be814982ceb0a1",
"text": "Large-scale data analysis lies in the core of modern enterprises and scientific research. With the emergence of cloud computing, the use of an analytical query processing infrastructure (e.g., Amazon EC2) can be directly mapped to monetary value. MapReduce has been a popular framework in the context of cloud computing, designed to serve long running queries (jobs) which can be processed in batch mode. Taking into account that different jobs often perform similar work, there are many opportunities for sharing. In principle, sharing similar work reduces the overall amount of work, which can lead to reducing monetary charges incurred while utilizing the processing infrastructure. In this paper we propose a sharing framework tailored to MapReduce. Our framework, MRShare, transforms a batch of queries into a new batch that will be executed more efficiently, by merging jobs into groups and evaluating each group as a single query. Based on our cost model for MapReduce, we define an optimization problem and we provide a solution that derives the optimal grouping of queries. Experiments in our prototype, built on top of Hadoop, demonstrate the overall effectiveness of our approach and substantial savings.",
"title": ""
},
{
"docid": "aea8ac7970162655d5616f5b3985430c",
"text": "The growing use of convolutional neural networks (CNN) for a broad range of visual tasks, including tasks involving fine details, raises the problem of applying such networks to a large field of view, since the amount of computations increases significantly with the number of pixels. To deal effectively with this difficulty, we develop and compare methods of using CNNs for the task of small target localization in natural images, given a limited ”budget” of samples to form an image. Inspired in part by human vision, we develop and compare variable sampling schemes, with peak resolution at the center and decreasing resolution with eccentricity, applied iteratively by re-centering the image at the previous predicted target location. The results indicate that variable resolution models significantly outperform constant resolution models. Surprisingly, variable resolution models and in particular multi-channel models, outperform the optimal, ”budget-free” full-resolution model, using only 5% of the samples.",
"title": ""
},
{
"docid": "74c6600ea1027349081c08c687119ee3",
"text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.",
"title": ""
},
{
"docid": "d1afaada6bf5927d9676cee61d3a1d49",
"text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.",
"title": ""
},
{
"docid": "bdbb97522eea6cb9f8e11f07c2e83282",
"text": "Middle ear surgery is strongly influenced by anatomical and functional characteristics of the middle ear. The complex anatomy means a challenge for the otosurgeon who moves between preservation or improvement of highly important functions (hearing, balance, facial motion) and eradication of diseases. Of these, perforations of the tympanic membrane, chronic otitis media, tympanosclerosis and cholesteatoma are encountered most often in clinical practice. Modern techniques for reconstruction of the ossicular chain aim for best possible hearing improvement using delicate alloplastic titanium prostheses, but a number of prosthesis-unrelated factors work against this intent. Surgery is always individualized to the case and there is no one-fits-all strategy. Above all, both middle ear diseases and surgery can be associated with a number of complications; the most important ones being hearing deterioration or deafness, dizziness, facial palsy and life-threatening intracranial complications. To minimize risks, a solid knowledge of and respect for neurootologic structures is essential for an otosurgeon who must train him- or herself intensively on temporal bones before performing surgery on a patient.",
"title": ""
},
{
"docid": "f4db297c70b1aba64ce3ed17b0837859",
"text": "Despite the success of the automatic speech recognition framework in its own application field, its adaptation to the problem of acoustic event detection has resulted in limited success. In this paper, instead of treating the problem similar to the segmentation and classification tasks in speech recognition, we pose it as a regression task and propose an approach based on random forest regression. Furthermore, event localization in time can be efficiently handled as a joint problem. We first decompose the training audio signals into multiple interleaved superframes which are annotated with the corresponding event class labels and their displacements to the temporal onsets and offsets of the events. For a specific event category, a random-forest regression model is learned using the displacement information. Given an unseen superframe, the learned regressor will output the continuous estimates of the onset and offset locations of the events. To deal with multiple event categories, prior to the category-specific regression phase, a superframe-wise recognition phase is performed to reject the background superframes and to classify the event superframes into different event categories. While jointly posing event detection and localization as a regression problem is novel, the superior performance on two databases ITC-Irst and UPC-TALP demonstrates the efficiency and potential of the proposed approach.",
"title": ""
},
{
"docid": "3e9f338da297c5173cf075fa15cd0a2e",
"text": "Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications.",
"title": ""
},
{
"docid": "eff4f126e50447f872109549d060fbc8",
"text": "Many combinatorial problems are NP-complete for general graphs. However, when restricted to series–parallel graphs or partial k-trees, many of these problems can be solved in polynomial time, mostly in linear time. On the other hand, very few problems are known to be NP-complete for series–parallel graphs or partial k-trees. These include the subgraph isomorphism problem and the bandwidth problem. However, these problems are NP-complete even for trees. In this paper, we show that the edge-disjoint paths problem is NP-complete for series–parallel graphs and for partial 2-trees although the problem is trivial for trees and can be solved for outerplanar graphs in polynomial time. ? 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "be09a9be6ef80f694c34546767300b41",
"text": "Nipple-sparing mastectomy (NSM) is increasingly popular as a procedure for the treatment of breast cancer and as a prophylactic procedure for those at high risk of developing the disease. However, it remains a controversial option due to questions regarding its oncological safety and concerns regarding locoregional recurrence. This systematic review with a pooled analysis examines the current literature regarding NSM, including locoregional recurrence and complication rates. Systematic electronic searches were conducted using the PubMed database and the Ovid database for studies reporting the indications for NSM and the subsequent outcomes. Studies between January 1970 and January 2015 (inclusive) were analysed if they met the inclusion criteria. Pooled descriptive statistics were performed. Seventy-three studies that met the inclusion criteria were included in the analysis, yielding 12,358 procedures. After a mean follow up of 38 months (range, 7.4-156 months), the overall pooled locoregional recurrence rate was 2.38%, the overall complication rate was 22.3%, and the overall incidence of nipple necrosis, either partial or total, was 5.9%. Significant heterogeneity was found among the published studies and patient selection was affected by tumour characteristics. We concluded that NSM appears to be an oncologically safe option for appropriately selected patients, with low rates of locoregional recurrence. For NSM to be performed, tumours should be peripherally located, smaller than 5 cm in diameter, located more than 2 cm away from the nipple margin, and human epidermal growth factor 2-negative. A separate histopathological examination of the subareolar tissue and exclusion of malignancy at this site is essential for safe oncological practice. Long-term follow-up studies and prospective cohort studies are required in order to determine the best reconstructive methods.",
"title": ""
},
{
"docid": "767b6a698ee56a4859c21f70f52b2b81",
"text": "This article surveyed the main neuromarketing techniques used in the world and the practical results obtained. Specifically, the objectives are (1) to identify the main existing definitions of neuromarketing; (2) to identify the importance and the potential contributions of neuromarketing; (3) to demonstrate the advantages of neuromarketing as a marketing research tool compared to traditional research methods; (4) to identify the ethical issues involved with neuromarketing research; (5) to present the main neuromarketing techniques that are being used in the development of marketing research; (6) to present studies in which neuromarketing research techniques were used; and (7) to identify the main limitations of neuromarketing. The results obtained allow an understanding of the ways to develop, store, Journal of Management Research ISSN 1941-899X 2014, Vol. 6, No. 2 www.macrothink.org/jmr 202 retrieve and use information about consumers, as well as ways to develop the field of neuromarketing. In addition to offering theoretical support for neuromarketing, this article discusses business cases, implementation and achievements.",
"title": ""
},
{
"docid": "e93eaa695003cb409957e5c7ed19bf2a",
"text": "Prominent research argues that consumers often use personal budgets to manage self-control problems. This paper analyzes the link between budgeting and selfcontrol problems in consumption-saving decisions. It shows that the use of goodspecific budgets depends on the combination of a demand for commitment and the demand for flexibility resulting from uncertainty about intratemporal trade-offs between goods. It explains the subtle mechanism which renders budgets useful commitments, their interaction with minimum-savings rules (another widely-studied form of commitment), and how budgeting depends on the intensity of self-control problems. This theory matches several empirical findings on personal budgeting. JEL CLASSIFICATION: D23, D82, D86, D91, E62, G31",
"title": ""
},
{
"docid": "47c05e54488884854e6bcd5170ed65e8",
"text": "This work is about a novel methodology for window detection in urban environments and its multiple use in vision system applications. The presented method for window detection includes appropriate early image processing, provides a multi-scale Haar wavelet representation for the determination of image tiles which is then fed into a cascaded classifier for the task of window detection. The classifier is learned from a Gentle Adaboost driven cascaded decision tree on masked information from training imagery and is tested towards window based ground truth information which is together with the original building image databases publicly available. The experimental results demonstrate that single window detection is to a sufficient degree successful, e.g., for the purpose of building recognition, and, furthermore, that the classifier is in general capable to provide a region of interest operator for the interpretation of urban environments. The extraction of this categorical information is beneficial to index into search spaces for urban object recognition as well as aiming towards providing a semantic focus for accurate post-processing in 3D information processing systems. Targeted applications are (i) mobile services on uncalibrated imagery, e.g. , for tourist guidance, (ii) sparse 3D city modeling, and (iii) deformation analysis from high resolution imagery.",
"title": ""
},
{
"docid": "63baa6371fc07d3ef8186f421ddf1070",
"text": "With the first few words of Neural Networks and Intellect: Using Model-Based Concepts, Leonid Perlovsky embarks on the daring task of creating a mathematical concept of “the mind.” The content of the book actually exceeds even the most daring of expectations. A wide variety of concepts are linked together intertwining the development of artificial intelligence, evolutionary computation, and even the philosophical observations ranging from Aristotle and Plato to Kant and Gvdel. Perlovsky discusses fundamental questions with a number of engineering applications to filter them through philosophical categories (both ontological and epistemological). In such a fashion, the inner workings of the human mind, consciousness, language-mind relationships, learning, and emotions are explored mathematically in amazing details. Perlovsky even manages to discuss the concept of beauty perception in mathematical terms. Beginners will appreciate that Perlovsky starts with the basics. The first chapter contains an introduction to probability, statistics, and pattern recognition, along with the intuitive explanation of the complicated mathematical concepts. The second chapter reviews numerous mathematical approaches, algorithms, neural networks, and the fundamental mathematical ideas underlying each method. It analyzes fundamental limitations of the nearest neighbor methods and the simple neural network. Vapnik’s statistical learning theory, support vector machines, and Grossberg’s neural field theories are clearly explained. Roles of hierarchical organization and evolutionary computation are analyzed. Even experts in the field might find interesting the relationships among various algorithms and approaches. Fundamental mathematical issues include origins of combinatorial complexity (CC) of many algorithms and neural networks (operations or training) and its relationship to di-",
"title": ""
},
{
"docid": "8053e52a227757090de0a88b80055e8c",
"text": "INTRODUCTION\nWe examined US adults' understanding of a Nutrition Facts panel (NFP), which requires health literacy (ie, prose, document, and quantitative literacy skills), and the association between label understanding and dietary behavior.\n\n\nMETHODS\nData were from the Health Information National Trends Survey, a nationally representative survey of health information seeking among US adults (N = 3,185) conducted from September 6, 2013, through December 30, 2013. Participants viewed an ice cream nutrition label and answered 4 questions that tested their ability to apply basic arithmetic and understanding of percentages to interpret the label. Participants reported their intake of sugar-sweetened soda, fruits, and vegetables. Regression analyses tested associations among label understanding, demographic characteristics, and self-reported dietary behaviors.\n\n\nRESULTS\nApproximately 24% of people could not determine the calorie content of the full ice-cream container, 21% could not estimate the number of servings equal to 60 g of carbohydrates, 42% could not estimate the effect on daily calorie intake of foregoing 1 serving, and 41% could not calculate the percentage daily value of calories in a single serving. Higher scores for label understanding were associated with consuming more vegetables and less sugar-sweetened soda, although only the association with soda consumption remained significant after adjusting for demographic factors.\n\n\nCONCLUSION\nMany consumers have difficulty interpreting nutrition labels, and label understanding correlates with self-reported dietary behaviors. The 2016 revised NFP labels may address some deficits in consumer understanding by eliminating the need to perform certain calculations (eg, total calories per package). However, some tasks still require the ability to perform calculations (eg, percentage daily value of calories). Schools have a role in teaching skills, such as mathematics, needed for nutrition label understanding.",
"title": ""
},
{
"docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea",
"text": "G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al. 's data can be explained by feed-forward models. The authors show that G. Francis and F. Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al. 's results.",
"title": ""
},
{
"docid": "6c45d7b4a7732da4441261f7f1e9e42c",
"text": "In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the proposed approach outperforms several baselines in terms of both extraction quality and fluency.",
"title": ""
},
{
"docid": "140815c8ccd62d0169fa294f6c4994b8",
"text": "Six specific personality traits – playfulness, chase-proneness, curiosity/fearlessness, sociability, aggressiveness, and distance-playfulness – and a broad boldness dimension have been suggested for dogs in previous studies based on data collected in a standardized behavioural test (‘‘dog mentality assessment’’, DMA). In the present study I investigated the validity of the specific traits for predicting typical behaviour in everyday life. A questionnaire with items describing the dog’s typical behaviour in a range of situations was sent to owners of dogs that had carried out the DMA behavioural test 1–2 years earlier. Of the questionnaires that were sent out 697 were returned, corresponding to a response rate of 73.3%. Based on factor analyses on the questionnaire data, behavioural factors in everyday life were suggested to correspond to the specific personality traits from the DMA. Correlation analyses suggested construct validity for the traits playfulness, curiosity/ fearlessness, sociability, and distance-playfulness. Chase-proneness, which I expected to be related to predatory behaviour in everyday life, was instead related to human-directed play interest and nonsocial fear. Aggressiveness was the only trait from the DMA with low association to all of the behavioural factors from the questionnaire. The results suggest that three components of dog personality are measured in the DMA: (1) interest in playing with humans; (2) attitude towards strangers (interest in, fear of, and aggression towards); and (3) non-social fearfulness. These three components correspond to the traits playfulness, sociability, and curiosity/fearlessness, respectively, all of which were found to be related to a higher-order shyness–boldness dimension. www.elsevier.com/locate/applanim Applied Animal Behaviour Science 91 (2005) 103–128 * Present address: Department of Anatomy and Physiology, Faculty of Veterinary Medicine and Animal Science, Swedish University of Agricultural Sciences, Box 7011, SE-750 07 Uppsala, Sweden. Tel.: +46 18 67 28 21; fax: +46 18 67 21 11. E-mail address: [email protected]. 0168-1591/$ – see front matter # 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.applanim.2004.08.030 Chase-proneness and distance-playfulness seem to be mixed measures of these personality components, and are not related to any additional components. Since the time between the behavioural test and the questionnaire was 1–2 years, the results indicate long-term consistency of the personality components. Based on these results, the DMA seems to be useful in predicting behavioural problems that are related to social and non-social fear, but not in predicting other potential behavioural problems. However, considering this limitation, the test seems to validly assess important aspects of dog personality, which supports the use of the test as an instrument in dog breeding and in selection of individual dogs for different purposes. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d6794e4917896ba1040b4a83f8bd69b4",
"text": "There has been little work on computational grammars for Amh aric or other Ethio-Semitic languages and their use for pars ing and generation. This paper introduces a grammar for a fragment o f Amharic within the Extensible Dependency Grammar (XDG) fr amework of Debusmann. A language such as Amharic presents special ch allenges for the design of a dependency grammar because of th complex morphology and agreement constraints. The paper describes how a morphological analyzer for the language can be integra t d into the grammar, introduces empty nodes as a solution to the problem of null subjects and objects, and extends the agreement prin ci le of XDG in several ways to handle verb agreement with objects as well as subjects and the constraints governing relative clause v erbs. It is shown that XDG’s multiple dimensions lend themselves to a new appr oach to relative clauses in the language. The introduced ext ensions to XDG are also applicable to other Ethio-Semitic languages.",
"title": ""
},
{
"docid": "fcce2e75108497f0e8e37300d6ad335c",
"text": "The authors performed a meta-analysis of studies examining the association between polymorphisms in the 5,10-methylenetetrahydrofolate reductase (MTHFR) gene, including MTHFR C677T and A1298C, and common psychiatric disorders, including unipolar depression, anxiety disorders, bipolar disorder, and schizophrenia. The primary comparison was between homozygote variants and the wild type for MTHFR C677T and A1298C. For unipolar depression and the MTHFR C677T polymorphism, the fixed-effects odds ratio for homozygote variants (TT) versus the wild type (CC) was 1.36 (95% confidence interval (CI): 1.11, 1.67), with no residual between-study heterogeneity (I(2) = 0%)--based on 1,280 cases and 10,429 controls. For schizophrenia and MTHFR C677T, the fixed-effects odds ratio for TT versus CC was 1.44 (95% CI: 1.21, 1.70), with low heterogeneity (I(2) = 42%)--based on 2,762 cases and 3,363 controls. For bipolar disorder and MTHFR C677T, the fixed-effects odds ratio for TT versus CC was 1.82 (95% CI: 1.22, 2.70), with low heterogeneity (I(2) = 42%)-based on 550 cases and 1,098 controls. These results were robust to various sensitively analyses. This meta-analysis demonstrates an association between the MTHFR C677T variant and depression, schizophrenia, and bipolar disorder, raising the possibility of the use of folate in treatment and prevention.",
"title": ""
}
] |
scidocsrr
|
fd442bf575a11d993e51e7decaabbe16
|
Neural CRF Parsing
|
[
{
"docid": "242977c8b2a5768b18fc276309407d60",
"text": "We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Björkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features.",
"title": ""
},
{
"docid": "bf543ecca4a533ef8871cc886cee66a1",
"text": "We propose the first implementation of an infinite-order generative dependency model. The model is based on a new recursive neural network architecture, the Inside-Outside Recursive Neural Network. This architecture allows information to flow not only bottom-up, as in traditional recursive neural networks, but also topdown. This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.",
"title": ""
},
{
"docid": "60a7e9be448a0ac4e25d1eed5b075de9",
"text": "Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8% respectively.",
"title": ""
}
] |
[
{
"docid": "a993a7a5aa45fb50e19326ec4c98472d",
"text": "Innumerable terror and suspicious messages are sent through Instant Messengers (IM) and Social Networking Sites (SNS) which are untraced, leading to hindrance for network communications and cyber security. We propose a Framework that discover and predict such messages that are sent using IM or SNS like Facebook, Twitter, LinkedIn, and others. Further, these instant messages are put under surveillance that identifies the type of suspected cyber threat activity by culprit along with their personnel details. Framework is developed using Ontology based Information Extraction technique (OBIE), Association rule mining (ARM) a data mining technique with set of pre-defined Knowledge-based rules (logical), for decision making process that are learned from domain experts and past learning experiences of suspicious dataset like GTD (Global Terrorist Database). The experimental results obtained will aid to take prompt decision for eradicating cyber crimes.",
"title": ""
},
{
"docid": "fb4837a619a6b9e49ca2de944ec2314e",
"text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.",
"title": ""
},
{
"docid": "843968fe4adff16e160c75105505db66",
"text": "As user-generated Web content increases, the amount of inappropriate and/or objectionable content also grows. Several scholarly communities are addressing how to detect and manage such content: research in computer vision focuses on detection of inappropriate images, natural language processing technology has advanced to recognize insults. However, profanity detection systems remain flawed. Current list-based profanity detection systems have two limitations. First, they are easy to circumvent and easily become stale - that is, they cannot adapt to misspellings, abbreviations, and the fast pace of profane slang evolution. Secondly, they offer a one-size fits all solution; they typically do not accommodate domain, community and context specific needs. However, social settings have their own normative behaviors - what is deemed acceptable in one community may not be in another. In this paper, through analysis of comments from a social news site, we provide evidence that current systems are performing poorly and evaluate the cases on which they fail. We then address community differences regarding creation/tolerance of profanity and suggest a shift to more contextually nuanced profanity detection systems.",
"title": ""
},
{
"docid": "b1313b777c940445eb540b1e12fa559e",
"text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.",
"title": ""
},
{
"docid": "c45a494afc622ec7ab5af78098945eeb",
"text": "While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.",
"title": ""
},
{
"docid": "5a769755df0734fdbc5fd1904f8c5f37",
"text": "Composite dental restorations represent a unique class of biomaterials with severe restrictions on biocompatibility, curing behavior, esthetics, and ultimate material properties. These materials are presently limited by shrinkage and polymerization-induced shrinkage stress, limited toughness, the presence of unreacted monomer that remains following the polymerization, and several other factors. Fortunately, these materials have been the focus of a great deal of research in recent years with the goal of improving restoration performance by changing the initiation system, monomers, and fillers and their coupling agents, and by developing novel polymerization strategies. Here, we review the general characteristics of the polymerization reaction and recent approaches that have been taken to improve composite restorative performance.",
"title": ""
},
{
"docid": "6875d41e412d71f45d6d4ea43697ed80",
"text": "Context Emergency department visits by older adults are often due to adverse drug events, but the proportion of these visits that are the result of drugs designated as inappropriate for use in this population is unknown. Contribution Analyses of a national surveillance study of adverse drug events and a national outpatient survey estimate that Americans age 65 years or older have more than 175000 emergency department visits for adverse drug events yearly. Three commonly prescribed drugs accounted for more than one third of visits: warfarin, insulin, and digoxin. Caution The study was limited to adverse events in the emergency department. Implication Strategies to decrease adverse drug events among older adults should focus on warfarin, insulin, and digoxin. The Editors Adverse drug events cause clinically significant morbidity and mortality and are associated with large economic costs (15). They are common in older adults, regardless of whether they live in the community, reside in long-term care facilities, or are hospitalized (59). Most physicians recognize that prescribing medications to older patients requires special considerations, but nongeriatricians are typically unfamiliar with the most commonly used measure of medication appropriateness for older patients: the Beers criteria (1012). The Beers criteria are a consensus-based list of medications identified as potentially inappropriate for use in older adults. The criteria were introduced in 1991 to help researchers evaluate prescription quality in nursing homes (10). The Beers criteria were updated in 1997 and 2003 to apply to all persons age 65 years or older, to include new medications judged to be ineffective or to pose unnecessarily high risk, and to rate the severity of adverse outcomes (11, 12). Prescription rates of Beers criteria medications have become a widely used measure of quality of care for older adults in research studies in the United States and elsewhere (1326). The application of the Beers criteria as a measure of health care quality and safety has expanded beyond research studies. The Centers for Medicare & Medicaid Services incorporated the Beers criteria into federal safety regulations for long-term care facilities in 1999 (27). The prescription rate of potentially inappropriate medications is one of the few medication safety measures in the National Healthcare Quality Report (28) and has been introduced as a Health Plan and Employer Data and Information Set quality measure for managed care plans (29). Despite widespread adoption of the Beers criteria to measure prescription quality and safety, as well as proposals to apply these measures to additional settings, such as medication therapy management services under Medicare Part D (30), population-based data on the effect of adverse events from potentially inappropriate medications are sparse and do not compare the risks for adverse events from Beers criteria medications against those from other medications (31, 32). Adverse drug events that lead to emergency department visits are clinically significant adverse events (5) and result in increased health care resource utilization and expense (6). We used nationally representative public health surveillance data to estimate the number of emergency department visits for adverse drug events involving Beers criteria medications and compared the number with that for adverse drug events involving other medications. 
We also estimated the frequency of outpatient prescription of Beers criteria medications and other medications to calculate and compare the risks for emergency department visits for adverse drug events per outpatient prescription visit. Methods Data Sources National estimates of emergency department visits for adverse drug events were based on data from the 58 nonpediatric hospitals participating in the National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance (NEISS-CADES) System, a nationally representative, size-stratified probability sample of hospitals (excluding psychiatric and penal institutions) in the United States and its territories with a minimum of 6 beds and a 24-hour emergency department (Figure 1) (3335). As described elsewhere (5, 34), trained coders at each hospital reviewed clinical records of every emergency department visit to report physician-diagnosed adverse drug events. Coders reported clinical diagnosis, medication implicated in the adverse event, and narrative descriptions of preceding circumstances. Data collection, management, quality assurance, and analyses were determined to be public health surveillance activities by the Centers for Disease Control and Prevention (CDC) and U.S. Food and Drug Administration human subjects oversight bodies and, therefore, did not require human subject review or institutional review board approval. Figure 1. Data sources and descriptions. NAMCS= National Ambulatory Medical Care Survey (36); NEISS-CADES= National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance System (5, 3335); NHAMCS = National Hospital Ambulatory Medical Care Survey (37). *The NEISS-CADES is a 63-hospital national probability sample, but 5 pediatric hospitals were not included in this analysis. National estimates of outpatient prescription were based on 2 cross-sectional surveys, the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), designed to provide information on outpatient office visits and visits to hospital outpatient clinics and emergency departments (Figure 1) (36, 37). These surveys have been previously used to document the prescription rates of inappropriate medications (17, 3840). Definition of Potentially Inappropriate Medications The most recent iteration of the Beers criteria (12) categorizes 41 medications or medication classes as potentially inappropriate under any circumstances (always potentially inappropriate) and 7 medications or medication classes as potentially inappropriate when used in certain doses, frequencies, or durations (potentially inappropriate in certain circumstances). For example, ferrous sulfate is considered to be potentially inappropriate only when used at dosages greater than 325 mg/d, but not potentially inappropriate if used at lower dosages. For this investigation, we included the Beers criteria medications listed in Table 1. Because medication dose, duration, and frequency were not always available in NEISS-CADES and are not reported in NAMCS and NHAMCS, we included medications regardless of dose, duration, or frequency of use. We excluded 3 medications considered to be potentially inappropriate when used in specific formulations (short-acting nifedipine, short-acting oxybutynin, and desiccated thyroid) because NEISS-CADES, NAMCS, and NHAMCS do not reliably identify these formulations. Table 1. 
Potentially Inappropriate Medications for Individuals Age 65 Years or Older The updated Beers criteria identify additional medications as potentially inappropriate if they are prescribed to patients who have certain preexisting conditions. We did not include these medications because they have rarely been used in previous studies or safety measures and NEISS-CADES, NAMCS, and NHAMCS do not reliably identify preexisting conditions. Identification of Emergency Department Visits for Adverse Drug Events We defined an adverse drug event case as an incident emergency department visit by a patient age 65 years or older, from 1 January 2004 to 31 December 2005, for a condition that the treating physician explicitly attributed to the use of a drug or for a drug-specific effect (5). Adverse events include allergic reactions (immunologically mediated effects) (41), adverse effects (undesirable pharmacologic or idiosyncratic effects at recommended doses) (41), unintentional overdoses (toxic effects linked to excess dose or impaired excretion) (41), or secondary effects (such as falls and choking). We excluded cases of intentional self-harm, therapeutic failures, therapy withdrawal, drug abuse, adverse drug events that occurred as a result of medical treatment received during the emergency department visit, and follow-up visits for a previously diagnosed adverse drug event. We defined an adverse drug event from Beers criteria medications as an emergency department visit in which a medication from Table 1 was implicated. Identification of Outpatient Prescription Visits We used the NAMCS and NHAMCS public use data files for the most recent year available (2004) to identify outpatient prescription visits. We defined an outpatient prescription visit as any outpatient office, hospital clinic, or emergency department visit at which treatment with a medication of interest was either started or continued. We identified medications by generic name for those with a single active ingredient and by individual active ingredients for combination products. We categorized visits with at least 1 medication identified in Table 1 as involving Beers criteria medications. Statistical Analysis Each NEISS-CADES, NAMCS, and NHAMCS case is assigned a sample weight on the basis of the inverse probability of selection (33, 4244). We calculated national estimates of emergency department visits and prescription visits by summing the corresponding sample weights, and we calculated 95% CIs by using the SURVEYMEANS procedure in SAS, version 9.1 (SAS Institute, Cary, North Carolina), to account for the sampling strata and clustering by site. To obtain annual estimates of visits for adverse events, we divided NEISS-CADES estimates for 20042005 and corresponding 95% CI end points by 2. Estimates based on small numbers of cases (<20 cases for NEISS-CADES and <30 cases for NAMCS and NHAMCS) or with a coefficient of variation greater than 30% are considered statistically unstable and are identified in the tables. To estimate the risk for adverse events relative to outpatient prescription",
"title": ""
},
{
"docid": "0f45452e8c9ca8aaf501e7e89685746b",
"text": "Chatbots are programs that mimic human conversation using Artificial Intelligence (AI). It is designed to be the ultimate virtual assistant, entertainment purpose, helping one to complete tasks ranging from answering questions, getting driving directions, turning up the thermostat in smart home, to playing one's favorite tunes etc. Chatbot has become more popular in business groups right now as they can reduce customer service cost and handles multiple users at a time. But yet to accomplish many tasks there is need to make chatbots as efficient as possible. To address this problem, in this paper we provide the design of a chatbot, which provides an efficient and accurate answer for any query based on the dataset of FAQs using Artificial Intelligence Markup Language (AIML) and Latent Semantic Analysis (LSA). Template based and general questions like welcome/ greetings and general questions will be responded using AIML and other service based questions uses LSA to provide responses at any time that will serve user satisfaction. This chatbot can be used by any University to answer FAQs to curious students in an interactive fashion.",
"title": ""
},
{
"docid": "807564cfc2e90dee21a3efd8dc754ba3",
"text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.",
"title": ""
},
{
"docid": "9457e14120dc6b3f97b654b457c4dee3",
"text": "We present a numerical framework for approximating unknown governing equations using observation data and deep neural networks (DNN). In particular, we propose to use residual network (ResNet) as the basic building block for equation approximation. We demonstrate that the ResNet block can be considered as a one-step method that is exact in temporal integration. We then present two multi-step methods, recurrent ResNet (RT-ResNet) method and recursive ReNet (RS-ResNet) method. The RT-ResNet is a multi-step method on uniform time steps, whereas the RS-ResNet is an adaptive multi-step method using variable time steps. All three methods presented here are based on integral form of the underlying dynamical system. As a result, they do not require time derivative data for equation recovery and can cope with relatively coarsely distributed trajectory data. Several numerical examples are presented to demonstrate the performance of the methods.",
"title": ""
},
{
"docid": "d28e502bef7a9ff4a7f4b0ec9a275cd4",
"text": "The paper focuses on vocabulary learning strategies as a subcategory of language learning strategies and their instruction within the ESP context at the Faculty of Maritime Studies and Transport in Portorož. Vocabulary strategy instruction will be implemented at our faculty as part of a broader PhD research into the effect of language learning strategy instruction on strategy use and subject-specific and general language acquisition. Additional variables that will be taken into consideration are language proficiency, motivation and learning styles of the students. The introductory section in which the situation that triggered my PhD research is presented is followed by a theoretical introduction to the concept of language and vocabulary learning strategies. The aspects that the paper focuses on are the central role of lexis within ESP, vocabulary learning strategy taxonomies, and the presentation of research studies made in the examined field to date. The final section presents the explicit vocabulary learning strategy instruction model. In the conclusion, some implications for teaching can be found. © 2006 Scripta Manent. Slovensko društvo uèiteljev tujega strokovnega jezika.",
"title": ""
},
{
"docid": "4806b28786af042c23897dbf23802789",
"text": "With the rapidly increasing popularity of deep neural networks for image recognition tasks, a parallel interest in generating adversarial examples to attack the trained models has arisen. To date, these approaches have involved either directly computing gradients with respect to the image pixels or directly solving an optimization on the image pixels. We generalize this pursuit in a novel direction: can a separate network be trained to efficiently attack another fully trained network? We demonstrate that it is possible, and that the generated attacks yield startling insights into the weaknesses of the target network. We call such a network an Adversarial Transformation Network (ATN). ATNs transform any input into an adversarial attack on the target network, while being minimally perturbing to the original inputs and the target network’s outputs. Further, we show that ATNs are capable of not only causing the target network to make an error, but can be constructed to explicitly control the type of misclassification made. We demonstrate ATNs on both simple MNISTdigit classifiers and state-of-the-art ImageNet classifiers deployed by Google, Inc.: Inception ResNet-v2. With the resurgence of deep neural networks for many real-world classification tasks, there is an increased interest in methods to assess the weaknesses in the trained models. Adversarial examples are small perturbations of the inputs that are carefully crafted to fool the network into producing incorrect outputs. Seminal work by (Szegedy et al. 2013) and (Goodfellow, Shlens, and Szegedy 2014), as well as much recent work, has shown that adversarial examples are abundant, and that there are many ways to discover them. Given a classifier f(x) : x ∈ X → y ∈ Y and original inputs x ∈ X , the problem of generating untargeted adversarial examples can be expressed as the optimization: argminx∗ L(x,x ∗) s.t. f(x∗) = f(x), where L(·) is a distance metric between examples from the input space (e.g., the L2 norm). Similarly, generating a targeted adversarial attack on a classifier can be expressed as argminx∗ L(x,x ∗) s.t. f(x∗) = yt, where yt ∈ Y is some target label chosen by the attacker. Until now, these optimization problems have been solved using three broad approaches: (1) By directly using optimizers like L-BFGS or Adam (Kingma and Ba 2015), as Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. proposed in (Szegedy et al. 2013) and (Carlini and Wagner 2016). (2) By approximation with single-step gradient-based techniques like fast gradient sign (Goodfellow, Shlens, and Szegedy 2014) or fast least likely class (Kurakin, Goodfellow, and Bengio 2016). (3) By approximation with iterative variants of gradient-based techniques (Kurakin, Goodfellow, and Bengio 2016; Moosavi-Dezfooli et al. 2016; Moosavi-Dezfooli, Fawzi, and Frossard 2016). These approaches use multiple forward and backward passes through the target network to more carefully move an input towards an adversarial classification. Other approaches assume a black-box model and only having access to the target model’s output (Papernot et al. 2016; Baluja, Covell, and Sukthankar 2015; Tramèr et al. 2016). See (Papernot et al. 2015) for a discussion of threat models. Each of the above approaches solved an optimization problem such that a single set of inputs was perturbed enough to force the target network to make a mistake. 
We take a fundamentally different approach: given a well-trained target network, can we create a separate attack-network that, with high probability, minimally transforms all inputs into ones that will be misclassified? No per-sample optimization problems should be solved. The attack-network should take as input a clean image and output a minimally modified image that will cause a misclassification in the target network. Further, can we do this while imposing strict constraints on the types and amount of perturbations allowed? We introduce a class of networks, called Adversarial Transformation Networks, to efficiently address this task. Adversarial Transformation Networks In this work, we propose Adversarial Transformation Networks (ATNs). An ATN is a neural network that transforms an input into an adversarial example against a target network or set of networks. ATNs may be untargeted or targeted, and trained in a black-box or white-box manner. In this work, we will focus on targeted, white-box ATNs. Formally, an ATN can be defined as a neural network: g_{f,θ}(x) : x ∈ X → x′ (1) where θ is the parameter vector of g, f is the target network which outputs a probability distribution across class labels, and x′ ∼ x, but argmax f(x) ≠ argmax f(x′).",
"title": ""
},
{
"docid": "ee2c37fd2ebc3fd783bfe53213e7470e",
"text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.",
"title": ""
},
{
"docid": "948e65673f679fe37027f4dc496397f8",
"text": "Online courses are growing at a tremendous rate, and although we have discovered a great deal about teaching and learning in the online environment, there is much left to learn. One variable that needs to be explored further is procrastination in online coursework. In this mixed methods study, quantitative methods were utilized to evaluate the influence of online graduate students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Additionally, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Collectively, results indicated that ability, effort, context, and luck influenced procrastination in this sample of graduate students. A discussion of these findings, implications for instructors, and recommendations for future research ensues. Online course offerings and degree programs have recently increased at a rapid rate and have gained in popularity among students (Allen & Seaman, 2010, 2011). Garrett (2007) reported that half of prospective students surveyed about postsecondary programs expressed a preference for online and hybrid programs, typically because of the flexibility and convenience (Daymont, Blau, & Campbell, 2011). Advances in learning management systems such as Blackboard have facilitated the dramatic increase in asynchronous programs. Although the research literature concerning online learning has blossomed over the past decade, much is left to learn about important variables that impact student learning and achievement. The purpose of this mixed methods study was to better understand the relationship between online graduate students’ attributional beliefs and their tendency to procrastinate. The approach to this objective was twofold. First, quantitative methods were utilized to evaluate the influence of students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Second, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Journal of Interactive Online Learning Rakes, Dunn, and Rakes",
"title": ""
},
{
"docid": "b66846f076d41c8be3f5921cc085d997",
"text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.",
"title": ""
},
{
"docid": "d90b6c61369ff0458843241cd30437ba",
"text": "The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leakrate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and planetary/lunar settlements. The improved health resulting from the calorie-restricted but nutrient dense Biosphere 2 diet was the first such scientifically controlled experiment with humans. The success of Biosphere 2 in creating a diversity of terrestrial and marine environments, from rainforest to coral reef, allowed detailed studies with comprehensive measurements such that the dynamics of these complex biomic systems are now better understood. The coral reef ecosystem, the largest artificial reef ever built, catalyzed methods of study now being applied to planetary coral reef systems. Restoration ecology advanced through the creation and study of the dynamics of adaptation and self-organization of the biomes in Biosphere 2. The international interest that Biosphere 2 generated has given new impetus to the public recognition of the sciences of biospheres (biospherics), biomes and closed ecological life systems. The facility, although no longer a materially-closed ecological system, is being used as an educational facility by Columbia University as an introduction to the study of the biosphere and complex system ecology and for carbon dioxide impacts utilizing the complex ecosystems created in Biosphere '. 
The many lessons learned from Biosphere 2 are being used by its key team of creators in their design and operation of a laboratory-sized closed ecological system, the Laboratory Biosphere, in operation as of March 2002, and for the design of a Mars on Earth(TM) prototype life support system for manned missions to Mars and Mars surface habitats. Biosphere 2 is an important foundation for future advances in biospherics and closed ecological system research.",
"title": ""
},
{
"docid": "99880fca88bef760741f48166a51ca6f",
"text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.",
"title": ""
},
{
"docid": "bfd0cba38636eb6b708347a55d4adcd6",
"text": "A flexible multiscale and directional representation for images is proposed. The scheme combines directional filter banks with the Laplacian pyramid to provides a sparse representation for twodimensional piecewise smooth signals resembling images. The underlying expansion is a frame and can be designed to be a tight frame. Pyramidal directional filter banks provide an effective method to implement the digital curvelet transform. The regularity issue of the iterated filters in the directional filter bank is examined.",
"title": ""
},
{
"docid": "45881ab3fc9b2d09f211808e8c9b0a3c",
"text": "Nowadays a large number of user-adaptive systems has been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. to be able to exchange user model portions and to use the information that has been exchanged to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to face interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.",
"title": ""
},
{
"docid": "d7594a6e11835ac94ee40e5d69632890",
"text": "(CLUES) is an advanced, automated mortgageunderwriting rule-based expert system. The system was developed to increase the production capacity and productivity of Countrywide branches, improve the consistency of underwriting, and reduce the cost of originating a loan. The system receives selected information from the loan application, credit report, and appraisal. It then decides whether the loan should be approved or whether it requires further review by a human underwriter. If the system approves the loan, no further review is required, and the application is funded. CLUES has been in operation since February 1993 and is currently processing more than 8500 loans each month in over 300 decentralized branches around the country.",
"title": ""
}
] |
scidocsrr
|
badab268b74d50beb5b21b9d5952b1b3
|
Turkish document classification based on Word2Vec and SVM classifier
|
[
{
"docid": "8da50eee8aaebe575eeaceae49c9fb37",
"text": "In this paper, we propose a set of language resources for building Turkish language processing applications. Specifically, we present a finite-state implementation of a morphological parser, an averaged perceptron-based morphological disambiguator, and compilation of a web corpus. Turkish is an agglutinative language with a highly productive inflectional and derivational morphology. We present an implementation of a morphological parser based on two-level morphology. This parser is one of the most complete parsers for Turkish and it runs independent of any other external system such as PCKIMMO in contrast to existing parsers. Due to complex phonology and morphology of Turkish, parsing introduces some ambiguous parses. We developed a morphological disambiguator with accuracy of about 98% using averaged perceptron algorithm. We also present our efforts to build a Turkish web corpus of about 423 million words.",
"title": ""
}
] |
[
{
"docid": "795e22957969913f2bfbc16c59a9a95d",
"text": "We present an incremental polynomial-time algorithm for enumerating all circuits of a matroid or, more generally, all minimal spanning sets for a flat. We also show the NP-hardness of several related enumeration problems. †RUTCOR, Rutgers University, 640 Bartholomew Road, Piscataway NJ 08854-8003; ({boros,elbassio,gurvich}@rutcor.rutgers.edu). ‡Department of Computer Science, Rutgers University, 110 Frelinghuysen Road, Piscataway NJ 08854-8003; ([email protected]). ∗This research was supported in part by the National Science Foundation Grant IIS0118635. The research of the first and third authors was also supported in part by the Office of Naval Research Grant N00014-92-J-1375. The second and third authors are also grateful for the partial support by DIMACS, the National Science Foundation’s Center for Discrete Mathematics and Theoretical Computer Science.",
"title": ""
},
{
"docid": "64e57a5382411ade7c0ad4ef7f094aa9",
"text": "In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature length movies. This involves the analysis of video clips of acted scenes lasting approximately one-two seconds, including the audio track which may contain human voices as well as background music. Our approach combines multiple deep neural networks for different data modalities, including: (1) a deep convolutional neural network for the analysis of facial expressions within video frames; (2) a deep belief net to capture audio information; (3) a deep autoencoder to model the spatio-temporal information produced by the human actions depicted within the entire scene; and (4) a shallow network architecture focused on extracted features of the mouth of the primary human subject in the scene. We discuss each of these techniques, their performance characteristics and different strategies to aggregate their predictions. Our best single model was a convolutional neural network trained to predict emotions from static frames using two large data sets, the Toronto Face Database and our own set of faces images harvested from Google image search, followed by a per frame aggregation strategy that used the challenge training data. This yielded a test set accuracy of 35.58%. Using our best strategy for aggregating our top performing models into a single predictor we were able to produce an accuracy of 41.03% on the challenge test set. These compare favorably to the challenge baseline test set accuracy of 27.56%.",
"title": ""
},
{
"docid": "eb12e9e10d379fcbc156e94c3b447ce1",
"text": "Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.\n We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.",
"title": ""
},
{
"docid": "8a1e94245d8fbdaf97402923d4dbc213",
"text": "This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.",
"title": ""
},
{
"docid": "cd45dd9d63c85bb0b23ccb4a8814a159",
"text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization",
"title": ""
},
{
"docid": "44ff9580f0ad6321827cf3f391a61151",
"text": "This paper aims to evaluate the aesthetic visual quality of a special type of visual media: digital images of paintings. Assessing the aesthetic visual quality of paintings can be considered a highly subjective task. However, to some extent, certain paintings are believed, by consensus, to have higher aesthetic quality than others. In this paper, we treat this challenge as a machine learning problem, in order to evaluate the aesthetic quality of paintings based on their visual content. We design a group of methods to extract features to represent both the global characteristics and local characteristics of a painting. Inspiration for these features comes from our prior knowledge in art and a questionnaire survey we conducted to study factors that affect human's judgments. We collect painting images and ask human subjects to score them. These paintings are then used for both training and testing in our experiments. Experimental results show that the proposed work can classify high-quality and low-quality paintings with performance comparable to humans. This work provides a machine learning scheme for the research of exploring the relationship between aesthetic perceptions of human and the computational visual features extracted from paintings.",
"title": ""
},
{
"docid": "bd100b77d129163277b9ea6225fd3af3",
"text": "Spatial interactions (or flows), such as population migration and disease spread, naturally form a weighted location-to-location network (graph). Such geographically embedded networks (graphs) are usually very large. For example, the county-to-county migration data in the U.S. has thousands of counties and about a million migration paths. Moreover, many variables are associated with each flow, such as the number of migrants for different age groups, income levels, and occupations. It is a challenging task to visualize such data and discover network structures, multivariate relations, and their geographic patterns simultaneously. This paper addresses these challenges by developing an integrated interactive visualization framework that consists three coupled components: (1) a spatially constrained graph partitioning method that can construct a hierarchy of geographical regions (communities), where there are more flows or connections within regions than across regions; (2) a multivariate clustering and visualization method to detect and present multivariate patterns in the aggregated region-to-region flows; and (3) a highly interactive flow mapping component to map both flow and multivariate patterns in the geographic space, at different hierarchical levels. The proposed approach can process relatively large data sets and effectively discover and visualize major flow structures and multivariate relations at the same time. User interactions are supported to facilitate the understanding of both an overview and detailed patterns.",
"title": ""
},
{
"docid": "a04e2df0d6ca5eae1db6569b43b897bd",
"text": "Workflow technologies have become a major vehicle for easy and efficient development of scientific applications. In the meantime, state-of-the-art resource provisioning technologies such as cloud computing enable users to acquire computing resources dynamically and elastically. A critical challenge in integrating workflow technologies with resource provisioning technologies is to determine the right amount of resources required for the execution of workflows in order to minimize the financial cost from the perspective of users and to maximize the resource utilization from the perspective of resource providers. This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources. Especially, we focus on its core algorithm named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time. The PBTS algorithm is designed to fit both elastic resource provisioning models such as Amazon EC2 and malleable parallel application models such as MapReduce. The experimental results with a number of synthetic workflows and several real science workflows demonstrate that PBTS estimates the resource capacity close to the theoretical low bound. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "485b48bb7b489d2be73de84994a16e42",
"text": "This paper presents Conflux, a fast, scalable and decentralized blockchain system that optimistically process concurrent blocks without discarding any as forks. The Conflux consensus protocol represents relationships between blocks as a direct acyclic graph and achieves consensus on a total order of the blocks. Conflux then, from the block order, deterministically derives a transaction total order as the blockchain ledger. We evaluated Conflux on Amazon EC2 clusters with up to 20k full nodes. Conflux achieves a transaction throughput of 5.76GB/h while confirming transactions in 4.5-7.4 minutes. The throughput is equivalent to 6400 transactions per second for typical Bitcoin transactions. Our results also indicate that when running Conflux, the consensus protocol is no longer the throughput bottleneck. The bottleneck is instead at the processing capability of individual nodes.",
"title": ""
},
{
"docid": "8ea5ed93c3c162c99fe329d243906712",
"text": "This paper describes the design, simulation and measurement of a dual-band slotted waveguide antenna array for adaptive 5G networks, operating in the millimeter wave frequency range. Its structure is composed by two groups of slots milled onto the opposite faces of a rectangular waveguide, enabling antenna operation over two different frequency bands, namely 28 and 38 GHz. Measured and numerical results, obtained using ANSYS HFSS, demonstrate two bandwidths of approximately 26.36% and 9.78% for 28 GHz and 38 GHz, respectively. The antenna gain varies from 12.6 dBi for the lower frequency band to 15.6dBi for the higher one.",
"title": ""
},
{
"docid": "adc389d574dfed8fd67418f9d04f6fcf",
"text": "In the recent years, studies of design and programming practices in mobile development are gaining more attention from researchers. Several such empirical studies used Android applications (paid, free, and open source) to analyze factors such as size, quality, dependencies, reuse, and cloning. Most of the studies use executable files of the apps (APK files), instead of source code because of availability issues (most of free apps available at the Android official market are not open-source, but still can be downloaded and analyzed in APK format). However, using only APK files in empirical studies comes with some threats to the validity of the results. In this paper, we analyze some of these pertinent threats. In particular, we analyzed the impact of third-party libraries and code obfuscation practices on estimating the amount of reuse by class cloning in Android apps. When including and excluding third-party libraries from the analysis, we found statistically significant differences in the amount of class cloning 24,379 free Android apps. Also, we found some evidence that obfuscation is responsible for increasing a number of false positives when detecting class clones. Finally, based on our findings, we provide a list of actionable guidelines for mining and analyzing large repositories of Android applications and minimizing these threats to validity",
"title": ""
},
{
"docid": "01875eeb7da3676f46dd9d3f8bf3ecac",
"text": "It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D C , has the shortest road distance T HE TRAVELING-SALESMAN PROBLEM might be described as follows: Find the shortest route (tour) for a salesman starting from a given city, visiting each of a specified group of cities, and then returning to the original point of departure. More generally, given an n by n symmetric matrix D={d,j), where du represents the 'distance' from / to J, arrange the points in a cyclic order in such a way that the sum of the du between consecutive points is minimal. Since there are only a finite number of possibilities (at most 3>' 2 (« —1)0 to consider, the problem is to devise a method of picking out the optimal arrangement which is reasonably efficient for fairly large values of n. Although algorithms have been devised for problems of similar nature, e.g., the optimal assignment problem,''** little is known about the traveling-salesman problem. We do not claim that this note alters the situation very much; what we shall do is outline a way of approaching the problem that sometimes, at least, enables one to find an optimal path and prove it so. In particular, it will be shown that a certain arrangement of 49 cities, one m each of the 48 states and Washington, D. C, is best, the du used representing road distances as taken from an atlas. * HISTORICAL NOTE-The origin of this problem is somewhat obscure. It appears to have been discussed informally among mathematicians at mathematics meetings for many years. Surprisingly little in the way of results has appeared in the mathematical literature.'\" It may be that the minimal-distance tour problem was stimulated by the so-called Hamiltonian game' which is concerned with finding the number of different tours possible over a specified network The latter problem is cited by some as the origin of group theory and has some connections with the famou8 Four-Color Conjecture ' Merrill Flood (Columbia Universitj') should be credited with stimulating interest in the traveling-salesman problem in many quarters. As early as 1937, he tried to obtain near optimal solutions in reference to routing of school buses. Both Flood and A W. Tucker (Princeton University) recall that they heard about the problem first in a seminar talk by Hassler Whitney at Princeton in 1934 (although Whitney, …",
"title": ""
},
{
"docid": "c9577cd328841c1574e18677c00731d0",
"text": "Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the“over-clustering” of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.",
"title": ""
},
{
"docid": "cc4e1ab4461261f181182600302c47b2",
"text": "The task of WSDM 2018 Music Recommendation Challenge is predict the probability of a user re-listening to a song within a specified time window after the first observable listening event. This paper presents our approach to this challenge. We built our recommendation models using multiple additive decision trees and factorization machines. By capturing the time-series characteristics of the music listening data, we can achieve significant improvement over baseline models. Meanwhile, ensemble of a collection of models that take into consideration the cold-start nature of the music recommendation task can further significantly improve upon the best single model. We show how our approach achieved an AUROC score of 0.73666 on the withheld test set, and thereby attaining the overall 5th place in the competition.",
"title": ""
},
{
"docid": "ece03e1f4d2d129daafebc63872a41e2",
"text": "With the development of Internet, social networks have become important platforms which allow users to follow streams of posts generated by their friends and acquaintances. Through mining a collection of nodes with similarities, community detection can make us understand the characteristics of complex network deeply. Therefore, community detection has attracted increasing attention in recent years. Since targeted at on-line social networks, we investigate how to exploit user's profile and topological structure information in social circle discovery. Firstly, according to directionality of linkages, we put forward inlink Salton metric and out-link Salton metric to measure user's topological structure. Then we propose an improved density peaks-based clustering method and deploy it to discover social circles with overlap on account of user's profileand topological structure-based features. Experiments on real-world dataset demonstrate the effectiveness of the proposed framework. Further experiments are conducted to understand the importance of different parameters and different features in social circle discovery. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c7eefe7395070d6da50120a82a5f624a",
"text": "The paper reports on three projects at our laboratory that deal respectively with synchronous collaborative design, asynchronous collaborative design, and design coordination. The Electronic Cocktail Napkin and its mobile extension that runs on hand-held computers supports synchronous design with shared freehand drawing environments. The PHIDIAS hypermedia system supports long-term, asynchronous collaboration by enabling designers of large complex artifacts to store Ž . and retrieve rationale about design decisions and the Construction Kit Builder CKB supports team design by supporting a priori agreements among team members to avoid conflicts. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "2e31e38fe00d4de7897e544b9aeebd6e",
"text": "Many researchers have conceptualized smoking uptake behavior in adolescence as progressing through a sequence of developmental stages. Multiple social, psychological, and biological factors influence this process, and may play different functions at different points in the progression, and play different roles for different people. The major objective of this paper is to review empirical studies of predictors of transitions in stages of smoking progression, and identify similarities and differences related to predictors of stages and transitions across studies. While a number of factors related to stage of progression replicated across studies, few variables uniquely predicted a particular stage or transition in smoking behavior. Subsequently, theoretical considerations related to stage conceptualization and measurement, inter-individual differences in intra-individual change, and the staged or continuous nature of smoking progression are discussed.",
"title": ""
},
{
"docid": "0f84e488b0e0b18e829aee14213dcebe",
"text": "The ability to reliably identify sarcasm and irony in text can improve the perfo rmance of many Natural Language Processing (NLP) systems including summarization, sentiment analysis, etc. The existing sar casm detection systems have focused on identifying sarcasm on a sentence level or for a specific phrase. However, often it is impos sible to identify a sentence containing sarcasm without knowing the context. In this paper we describe a corpus generation experiment w h re e collect regular and sarcastic Amazon product reviews. We perform qualitative and quantitative analysis of the corpus. The resu lting corpus can be used for identifying sarcasm on two levels: a document and a text utterance (where a text utterance can be as short as a sentence and as long as a whole document).",
"title": ""
},
{
"docid": "5b7930de475b6f83f8333439fd0f9c3b",
"text": "Cloud applications are increasingly built from a mixture of runtime technologies. Hosted functions and service-oriented web hooks are among the most recent ones which are natively supported by cloud platforms. They are collectively referred to as serverless computing by application engineers due to the transparent on-demand instance activation and microbilling without the need to provision infrastructure explicitly. This half-day tutorial explains the use cases for serverless computing and the drivers and existing software solutions behind the programming and deployment model also known as Function-as-a-Service in the overall cloud computing stack. Furthermore, it presents practical open source tools for deriving functions from legacy code and for the management and execution of functions in private and public clouds.",
"title": ""
},
{
"docid": "2f200468d1c8ddef1e1805cfb047b702",
"text": "BACKGROUND\nIn a previous trial of antiretroviral therapy (ART) involving pregnant women with human immunodeficiency virus (HIV) infection, those randomly assigned to receive tenofovir, emtricitabine, and ritonavir-boosted lopinavir (TDF-FTC-LPV/r) had infants at greater risk for very premature birth and death within 14 days after delivery than those assigned to receive zidovudine, lamivudine, and ritonavir-boosted lopinavir (ZDV-3TC-LPV/r).\n\n\nMETHODS\nUsing data from two U.S.-based cohort studies, we compared the risk of adverse birth outcomes among infants with in utero exposure to ZDV-3TC-LPV/r, TDF-FTC-LPV/r, or TDF-FTC with ritonavir-boosted atazanavir (ATV/r). We evaluated the risk of preterm birth (<37 completed weeks of gestation), very preterm birth (<34 completed weeks), low birth weight (<2500 g), and very low birth weight (<1500 g). Risk ratios with 95% confidence intervals were estimated with the use of modified Poisson models to adjust for confounding.\n\n\nRESULTS\nThere were 4646 birth outcomes. Few infants or fetuses were exposed to TDF-FTC-LPV/r (128 [2.8%]) as the initial ART regimen during gestation, in contrast with TDF-FTC-ATV/r (539 [11.6%]) and ZDV-3TC-LPV/r (954 [20.5%]). As compared with women receiving ZDV-3TC-LPV/r, women receiving TDF-FTC-LPV/r had a similar risk of preterm birth (risk ratio, 0.90; 95% confidence interval [CI], 0.60 to 1.33) and low birth weight (risk ratio, 1.13; 95% CI, 0.78 to 1.64). As compared to women receiving TDF-FTC-ATV/r, women receiving TDF-FTC-LPV/r had a similar or slightly higher risk of preterm birth (risk ratio, 1.14; 95% CI, 0.75 to 1.72) and low birth weight (risk ratio, 1.45; 95% CI, 0.96 to 2.17). There were no significant differences between regimens in the risk of very preterm birth or very low birth weight.\n\n\nCONCLUSIONS\nThe risk of adverse birth outcomes was not higher with TDF-FTC-LPV/r than with ZDV-3TC-LPV/r or TDF-FTC-ATV/r among HIV-infected women and their infants in the United States, although power was limited for some comparisons. (Funded by the National Institutes of Health and others.).",
"title": ""
}
] |
scidocsrr
|
04092f4adbbd6afe0bc4dd313b0e68cb
|
Understanding technology choices and values through social class
|
[
{
"docid": "0d6d4bd526bbb27eaf4a42aeeeb08c94",
"text": "We describe a new method for use in the process of co-designing technologies with users called technology probes. Technology probes are simple, flexible, adaptable technologies with three interdisciplinary goals: the social science goal of understanding the needs and desires of users in a real-world setting, the engineering goal of field-testing the technology, and the design goal of inspiring users and researchers to think about new technologies. We present the results of designing and deploying two technology probes, the messageProbe and the videoProbe, with diverse families in France, Sweden, and the U.S. We conclude with our plans for creating new technologies for and with families based on our experiences.",
"title": ""
},
{
"docid": "cf768855de6b9c33a1b8284b4e24383f",
"text": "The Value Sensitive Design (VSD) methodology provides a comprehensive framework for advancing a value-centered research and design agenda. Although VSD provides helpful ways of thinking about and designing value-centered computational systems, we argue that the specific mechanics of VSD create thorny tensions with respect to value sensitivity. In particular, we examine limitations due to value classifications, inadequate guidance on empirical tools for design, and the ways in which the design process is ordered. In this paper, we propose ways of maturing the VSD methodology to overcome these limitations and present three empirical case studies that illustrate a family of methods to effectively engage local expressions of values. The findings from our case studies provide evidence of how we can mature the VSD methodology to mitigate the pitfalls of classification and engender a commitment to reflect on and respond to local contexts of design.",
"title": ""
}
] |
[
{
"docid": "5249a94aa9d9dbb211bb73fa95651dfd",
"text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.",
"title": ""
},
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
},
{
"docid": "6a74a4d52d468b823a8a9e1a123864bd",
"text": "In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person reidentification. Code is available at: https://github. com/zhunzhong07/Random-Erasing.",
"title": ""
},
{
"docid": "8096963007e7b6bfbcaf406e16125df9",
"text": "This paper analyses the substantially growing markets for crowdfunding, in which retail investors lend to borrowers without financial intermediaries. Critics suggest these markets allow sophisticated investors to take advantage of unsophisticated investors. The growth and viability of these markets critically depends on the underlying incentives. We provide evidence of perverse incentives in crowdfunding that are not fully recognized by the market. In particular we look at group leader bids in the presence of origination fees and find that these bids are (wrongly) perceived as a signal of good loan quality, resulting in lower interest rates. Yet these loans actually have higher default rates. These adverse incentives are overcome only with sufficient skin in the game and when there are no origination fees. The results provide important implications for crowdfunding, its structure and regulation.",
"title": ""
},
{
"docid": "ec03f26e8a4708c8e9f839b3006d0231",
"text": "We propose an automatic diabetic retinopathy (DR) analysis algorithm based on two-stages deep convolutional neural networks (DCNN). Compared to existing DCNN-based DR detection methods, the proposed algorithm have the following advantages: (1) Our method can point out the location and type of lesions in the fundus images, as well as giving the severity grades of DR. Moreover, since retina lesions and DR severity appear with different scales in fundus images, the integration of both local and global networks learn more complete and specific features for DR analysis. (2) By introducing imbalanced weighting map, more attentions will be given to lesion patches for DR grading, which significantly improve the performance of the proposed algorithm. In this study, we label 12, 206 lesion patches and re-annotate the DR grades of 23, 595 fundus images from Kaggle competition dataset. Under the guidance of clinical ophthalmologists, the experimental results show that our local lesion detection net achieve comparable performance with trained human observers, and the proposed imbalanced weighted scheme also be proved to significantly improve the capability of our DCNN-based DR grading algorithm.",
"title": ""
},
{
"docid": "8bbe774cd10b2383ad2c81c1c492e617",
"text": "BACKGROUND\nModern management of Parkinson's disease (PD) aims to obtain symptom control, to reduce clinical disability, and to improve quality of life. Music acts as a specific stimulus to obtain motor and emotional responses by combining movement and stimulation of different sensory pathways. We explored the efficacy of active music therapy (MT) on motor and emotional functions in patients with PD.\n\n\nMETHODS\nThis prospective, randomized, controlled, single-blinded study lasted 3 months. It consisted of weekly sessions of MT and physical therapy (PT). Thirty-two patients with PD, all stable responders to levodopa and in Hoehn and Yahr stage 2 or 3, were randomly assigned to two groups of 16 patients each. We assessed severity of PD with the Unified Parkinson's Disease Rating Scale, emotional functions with the Happiness Measure, and quality of life using the Parkinson's Disease Quality of Life Questionnaire. MT sessions consisted of choral singing, voice exercise, rhythmic and free body movements, and active music involving collective invention. PT sessions included a series of passive stretching exercises, specific motor tasks, and strategies to improve balance and gait.\n\n\nRESULTS\nMT had a significant overall effect on bradykinesia as measured by the Unified Parkinson's Disease Rating Scale (p < .034). Post-MT session findings were consistent with motor improvement, especially in bradykinesia items (p < .0001). Over time, changes on the Happiness Measure confirmed a beneficial effect of MT on emotional functions (p < .0001). Improvements in activities of daily living and in quality of life were also documented in the MT group (p < .0001). PT improved rigidity (p < .0001).\n\n\nCONCLUSIONS\nMT is effective on motor, affective, and behavioral functions. We propose active MT as a new method for inclusion in PD rehabilitation programs.",
"title": ""
},
{
"docid": "70d820f14b4d30f03268e51db87e19f0",
"text": "Many emerging applications driven the fast development of the device-free localization DfL technique, which does not require the target to carry any wireless devices. Most current DfL approaches have two main drawbacks in practical applications. First, as the pre-calibrated received signal strength RSS in each location i.e., radio-map of a specific area cannot be directly applied to the new areas, the manual calibration for different areas will lead to a high human effort cost. Second, a large number of RSS are needed to accurately localize the targets, thus causes a high communication cost and the areas variety will further exacerbate this problem. This paper proposes FitLoc, a fine-grained and low cost DfL approach that can localize multiple targets over various areas, especially in the outdoor environment and similar furnitured indoor environment. FitLoc unifies the radio-map over various areas through a rigorously designed transfer scheme, thus greatly reduces the human effort cost. Furthermore, benefiting from the compressive sensing theory, FitLoc collects a few RSS and performs a fine-grained localization, thus reduces the communication cost. Theoretical analyses validate the effectivity of the problem formulation and the bound of localization error is provided. Extensive experimental results illustrate the effectiveness and robustness of FitLoc.",
"title": ""
},
{
"docid": "92c5f9d8f33f00dc0ced4b2fa57916f3",
"text": "Blockchain holds promise for being the revolutionary technology, which has the potential to find applications in numerous fields such as digital money, clearing, gambling and product tracing. However, blockchain faces its own problems and challenges. One key problem is to automatically cluster the behavior patterns of all the blockchain nodes into categories. In this paper, we introduce the problem of behavior pattern clustering in blockchain networks and propose a novel algorithm termed BPC for this problem. We evaluate a long list of potential sequence similarity measures, and select a distance that is suitable for the behavior pattern clustering problem. Extensive experiments show that our proposed algorithm is much more effective than the existing methods in terms of clustering accuracy.",
"title": ""
},
{
"docid": "4ac2dcda8b5d843631ef0328154d5e15",
"text": "This paper describes the use of physical unclonable functions (PUFs) in low-cost authentication and key generation applications. First, it motivates the use of PUFs versus conventional secure nonvolatile memories and defines the two primary PUF types: “strong PUFs” and “weak PUFs.” It describes strong PUF implementations and their use for low-cost authentication. After this description, the paper covers both attacks and protocols to address errors. Next, the paper covers weak PUF implementations and their use in key generation applications. It covers error-correction schemes such as pattern matching and index-based coding. Finally, this paper reviews several emerging concepts in PUF technologies such as public model PUFs and new PUF implementation technologies.",
"title": ""
},
{
"docid": "f262aba2003f986012bbec1a9c2fcb83",
"text": "Hemiplegic migraine is a rare form of migraine with aura that involves motor aura (weakness). This type of migraine can occur as a sporadic or a familial disorder. Familial forms of hemiplegic migraine are dominantly inherited. Data from genetic studies have implicated mutations in genes that encode proteins involved in ion transportation. However, at least a quarter of the large families affected and most sporadic cases do not have a mutation in the three genes known to be implicated in this disorder, suggesting that other genes are still to be identified. Results from functional studies indicate that neuronal hyperexcitability has a pivotal role in the pathogenesis of hemiplegic migraine. The clinical manifestations of hemiplegic migraine range from attacks with short-duration hemiparesis to severe forms with recurrent coma and prolonged hemiparesis, permanent cerebellar ataxia, epilepsy, transient blindness, or mental retardation. Diagnosis relies on a careful patient history and exclusion of potential causes of symptomatic attacks. The principles of management are similar to those for common varieties of migraine, except that vasoconstrictors, including triptans, are historically contraindicated but are often used off-label to stop the headache, and prophylactic treatment can include lamotrigine and acetazolamide.",
"title": ""
},
{
"docid": "13f5414bcdc5213fef9458fa31f2e593",
"text": "It has been suggested that the prevalence of Helicobacter pylori infection has stabilized in the USA and is decreasing in China. We conducted a systematic literature analysis to test this hypothesis. PubMed and Embase searches were conducted up to 19 January 2015. Trends in the prevalence of H. pylori infection over time were assessed by regression analysis using Microsoft Excel. Overall, 25 Chinese studies (contributing 28 datasets) and 11 US studies (contributing 11 datasets) were included. There was a significant decrease over time in the H. pylori infection prevalence for the Chinese studies overall (p = 0.00018) and when studies were limited to those that used serum immunoglobulin G (IgG) assays to detect H. pylori infection (p = 0.014; 20 datasets). The weighted mean prevalence of H. pylori infection was 66 % for rural Chinese populations and 47 % for urban Chinese populations. There was a significant trend towards a decreasing prevalence of H. pylori infection for studies that included only urban populations (p = 0.04; 9 datasets). This trend was no longer statistically significant when these studies were further restricted to those that used serum IgG assays to detect H. pylori infection, although this may have been because of low statistical power due to the small number of datasets available for this analysis (p = 0.28; 6 datasets). There were no significant trends in terms of changes in the prevalence of H. pylori infection over time for studies conducted in the USA. In conclusion, the prevalence of H. pylori infection is most likely decreasing in China, due to a combination of increasing urbanization, which we found to be associated with lower H. pylori infection rates, and possibly also decreasing rates of H. pylori infection within urban populations. This will probably result in a gradual decrease in peptic ulcer and gastric cancer rates in China over time.",
"title": ""
},
{
"docid": "9fe23b9bb2ac499f2a29dbd010ded826",
"text": "A number of authors have recently considered iterative soft-in soft-out (SISO) decoding algorithms for classical linear block codes that utilize redundant Tanner graphs. Jiang and Narayanan presented a practically realizable algorithm that applies only to cyclic codes while Kothiyal et al. presented an algorithm that, while applicable to arbitrary linear block codes, does not imply a low-complexity implementation. This work first presents the aforementioned algorithms in a common framework and then presents a related algorithm - random redundant iterative decoding - that is both practically realizable and applicable to arbitrary linear block codes. Simulation results illustrate the successful application of the random redundant iterative decoding algorithm to the extended binary Golay code. Additionally, the proposed algorithm is shown to outperform Jiang and Narayanan's algorithm for a number of Bose-Chaudhuri-Hocquenghem (BCH) codes",
"title": ""
},
{
"docid": "ef787cfc1b00c9d05ec9293ff802f172",
"text": "High Definition (HD) maps play an important role in modern traffic scenes. However, the development of HD maps coverage grows slowly because of the cost limitation. To efficiently model HD maps, we proposed a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. And we introduced TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we proposed a pipeline to model HD maps with crowdsourced data for the first time. And the maps can be constructed precisely even with inaccurate crowdsourced data.",
"title": ""
},
{
"docid": "29236d00bde843ff06e0f1a3e0ab88e4",
"text": "■ The advent of the modern cruise missile, with reduced radar observables and the capability to fly at low altitudes with accurate navigation, placed an enormous burden on all defense weapon systems. Every element of the engagement process, referred to as the kill chain, from detection to target kill assessment, was affected. While the United States held the low-observabletechnology advantage in the late 1970s, that early lead was quickly challenged by advancements in foreign technology and proliferation of cruise missiles to unfriendly nations. Lincoln Laboratory’s response to the various offense/defense trade-offs has taken the form of two programs, the Air Vehicle Survivability Evaluation program and the Radar Surveillance Technology program. The radar developments produced by these two programs, which became national assets with many notable firsts, is the subject of this article.",
"title": ""
},
{
"docid": "e2abe7d1ceba4a71b0713eb5eda795d3",
"text": "Lossy image and video compression algorithms yield visually annoying artifacts including blocking, blurring, and ringing, especially at low bit-rates. To reduce these artifacts, post-processing techniques have been extensively studied. Recently, inspired by the great success of convolutional neural network (CNN) in computer vision, some researches were performed on adopting CNN in post-processing, mostly for JPEG compressed images. In this paper, we present a CNN-based post-processing algorithm for High Efficiency Video Coding (HEVC), the state-of-theart video coding standard. We redesign a Variable-filter-size Residuelearning CNN (VRCNN) to improve the performance and to accelerate network training. Experimental results show that using our VRCNN as post-processing leads to on average 4.6% bit-rate reduction compared to HEVC baseline. The VRCNN outperforms previously studied networks in achieving higher bit-rate reduction, lower memory cost, and multiplied computational speedup.",
"title": ""
},
{
"docid": "348488fc6dd8cea52bd7b5808209c4c0",
"text": "Information Technology (IT) within Secretariat General of The Indonesian House of Representatives has important role to support the Member of Parliaments (MPs) duties and functions and therefore needs to be well managed to become enabler in achieving organization goals. In this paper, IT governance at Secretariat General of The Indonesian House of Representatives is evaluated using COBIT 5 framework to get their current capabilities level which then followed by recommendations to improve their level. The result of evaluation shows that IT governance process of Secretariat General of The Indonesian House of Representatives is 1.1 (Performed Process), which means that IT processes have been implemented and achieved their purpose. Recommendations for process improvement are derived based on three criteria (Stakeholder's support, IT human resources, and Achievement target time) resulting three processes in COBIT 5 that need to be prioritized: APO13 (Manage Security), BAI01 (Manage Programmes and Projects), and EDM01 (Ensure Governance Framework Setting and Maintenance).",
"title": ""
},
{
"docid": "a861f641d58e6a1249f1dc960fbd2baf",
"text": "Although CRISPR-Cas9 nucleases are widely used for genome editing, the range of sequences that Cas9 can recognize is constrained by the need for a specific protospacer adjacent motif (PAM). As a result, it can often be difficult to target double-stranded breaks (DSBs) with the precision that is necessary for various genome-editing applications. The ability to engineer Cas9 derivatives with purposefully altered PAM specificities would address this limitation. Here we show that the commonly used Streptococcus pyogenes Cas9 (SpCas9) can be modified to recognize alternative PAM sequences using structural information, bacterial selection-based directed evolution, and combinatorial design. These altered PAM specificity variants enable robust editing of endogenous gene sites in zebrafish and human cells not currently targetable by wild-type SpCas9, and their genome-wide specificities are comparable to wild-type SpCas9 as judged by GUIDE-seq analysis. In addition, we identify and characterize another SpCas9 variant that exhibits improved specificity in human cells, possessing better discrimination against off-target sites with non-canonical NAG and NGA PAMs and/or mismatched spacers. We also find that two smaller-size Cas9 orthologues, Streptococcus thermophilus Cas9 (St1Cas9) and Staphylococcus aureus Cas9 (SaCas9), function efficiently in the bacterial selection systems and in human cells, suggesting that our engineering strategies could be extended to Cas9s from other species. Our findings provide broadly useful SpCas9 variants and, more importantly, establish the feasibility of engineering a wide range of Cas9s with altered and improved PAM specificities.",
"title": ""
},
{
"docid": "2b4ebb63edcca2ef754ed90862ae43c1",
"text": "An overview on the history of the development of insulated gate bipolar transistors (IGBTs) as one key component in today’s power electronic systems is given; the state-of-the-art device concepts are explained as well as an detailed outlook about ongoing and foreseeable development steps is shown. All these measures will result on the one hand in ongoing power density and efficiency increase as important contributors for worldwide energy saving and environmental protection efforts. On the other hand, the exciting competition of more maturing Si IGBT technology with the wide bandgap successors of GaN and SiC switches will go on.",
"title": ""
},
{
"docid": "390e9e2bfb8e94d70d1dbcfbede6dd46",
"text": "Modern software-based services are implemented as distributed systems with complex behavior and failure modes. Many large tech organizations are using experimentation to verify such systems' reliability. Netflix engineers call this approach chaos engineering. They've determined several principles underlying it and have used it to run experiments. This article is part of a theme issue on DevOps.",
"title": ""
},
{
"docid": "3c3f3a9d6897510d5d5d3d55c882502c",
"text": "Error-tolerant graph matching is a powerful concept that has various applications in pattern recognition and machine vision. In the present paper, a new distance measure on graphs is proposed. It is based on the maximal common subgraph of two graphs. The new measure is superior to edit distance based measures in that no particular edit operations together with their costs need to be defined. It is formally shown that the new distance measure is a metric. Potential algorithms for the efficient computation of the new measure are discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
302e3d65669d73692eb8d943da2ac257
|
Design of an Autonomous Precision Pollination Robot
|
[
{
"docid": "6f242ee8418eebdd9fdce50ca1e7cfa2",
"text": "HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L'archive ouverte pluridisciplinaire HAL, est destinée au dépôt età la diffusion de documents scientifiques de niveau recherche, publiés ou non, ´ emanant desétablissements d'enseignement et de recherche français oú etrangers, des laboratoires publics ou privés. Summary. This paper describes the construction and functionality of an Autonomous Fruit Picking Machine (AFPM) for robotic apple harvesting. The key element for the success of the AFPM is the integrated approach which combines state of the art industrial components with the newly designed flexible gripper. The gripper consist of a silicone funnel with a camera mounted inside. The proposed concepts guarantee adequate control of the autonomous fruit harvesting operation globally and of the fruit picking cycle particularly. Extensive experiments in the field validate the functionality of the AFPM.",
"title": ""
}
] |
[
{
"docid": "7f1c7a0887917937147c8f5b2dbe2df3",
"text": "We consider the problem of learning probabilistic models fo r c mplex relational structures between various types of objects. A model can hel p us “understand” a dataset of relational facts in at least two ways, by finding in terpretable structure in the data, and by supporting predictions, or inferences ab out whether particular unobserved relations are likely to be true. Often there is a t radeoff between these two aims: cluster-based models yield more easily interpret abl representations, while factorization-based approaches have given better pr edictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relatio ns in a nonparametric Bayesian clustering framework. Inference is fully Bayesia n but scales well to large data sets. The model simultaneously discovers interp retable clusters and yields predictive performance that matches or beats previo us probabilistic models for relational data.",
"title": ""
},
{
"docid": "f59fd6af9dea570b49c453de02297f4c",
"text": "OBJECTIVES\nThe role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0.Multiple studies illustrated the use of information in social media to discover biomedical and health-related knowledge.Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words.These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data where sparsity and noise are norms.This paper aims to address the limitations posed by the traditional bag-of-word based methods and propose to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove to be useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data.Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data.\n\n\nMETHODOLOGY\nSocial media data is characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically.The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise.We first identify the limitations of the traditional methods which train machines with N-gram word features, then propose to overcome such limitations by utilizing the collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data.The parameter analysis for tuning each classifier is also reported.\n\n\nDATA SETS\nThree data sets are used in this research.The first data set comprises of approximately 5000 hand-labeled tweets, and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment.The second data set is a random sample of real-world Twitter data in the US.The third data set is a random sample of real-world Facebook Timeline posts.\n\n\nEVALUATIONS\nTwo sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations.The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models, and to compare with the stage-of-the-art method.The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media.\n\n\nFINDINGS\nThe small scale experiment reveals that the proposed method is able to mitigate the limitations in the well established techniques existing in the literature, resulting in performance improvement of 18.61% (F-measure).The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-Measure) on average.",
"title": ""
},
{
"docid": "7d0ebf939deed43253d5360e325c3e8e",
"text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.",
"title": ""
},
{
"docid": "b9404d66fa6cc759382c73d6ae16fc0c",
"text": "Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.",
"title": ""
},
{
"docid": "6c07520a738f068f1bc3bdb8e3fda89b",
"text": "We analyze the role of the Global Brain in the sharing economy, by synthesizing the notion of distributed intelligence with Goertzel’s concept of an offer network. An offer network is an architecture for a future economic system based on the matching of offers and demands without the intermediate of money. Intelligence requires a network of condition-action rules, where conditions represent challenges that elicit action in order to solve a problem or exploit an opportunity. In society, opportunities correspond to offers of goods or services, problems to demands. Tackling challenges means finding the best sequences of condition-action rules to connect all demands to the offers that can satisfy them. This can be achieved with the help of AI algorithms working on a public database of rules, demands and offers. Such a system would provide a universal medium for voluntary collaboration and economic exchange, efficiently coordinating the activities of all people on Earth. It would replace and subsume the patchwork of commercial and community-based sharing platforms presently running on the Internet. It can in principle resolve the traditional problems of the capitalist economy: poverty, inequality, externalities, poor sustainability and resilience, booms and busts, and the neglect of non-monetizable values.",
"title": ""
},
{
"docid": "8fcf31f2de602cf10f769c41acccc221",
"text": "This book contains materials that come out of the Artificial General Intelligence Research Institute (AGIRI) Workshop, held in May 20-21, 2006 at Washington DC. The theme of the workshop is “Transitioning from Narrow AI to Artificial General Intelligence.” In this introductory chapter, we will clarify the notion of “Artificial General Intelligence”, briefly survey the past and present situation of the field, analyze and refute some common objections and doubts regarding this area of research, and discuss what we believe needs to be addressed by the field as a whole in the near future. Finally, we will briefly summarize the contents of the other chapters in this collection.",
"title": ""
},
{
"docid": "36b46a2bf4b46850f560c9586e91d27b",
"text": "Promoting pro-environmental behaviour amongst urban dwellers is one of today's greatest sustainability challenges. The aim of this study is to test whether an information intervention, designed based on theories from environmental psychology and behavioural economics, can be effective in promoting recycling of food waste in an urban area. To this end we developed and evaluated an information leaflet, mainly guided by insights from nudging and community-based social marketing. The effect of the intervention was estimated through a natural field experiment in Hökarängen, a suburb of Stockholm city, Sweden, and was evaluated using a difference-in-difference analysis. The results indicate a statistically significant increase in food waste recycled compared to a control group in the research area. The data analysed was on the weight of food waste collected from sorting stations in the research area, and the collection period stretched for almost 2 years, allowing us to study the short- and long term effects of the intervention. Although the immediate positive effect of the leaflet seems to have attenuated over time, results show that there was a significant difference between the control and the treatment group, even 8 months after the leaflet was distributed. Insights from this study can be used to guide development of similar pro-environmental behaviour interventions for other urban areas in Sweden and abroad, improving chances of reaching environmental policy goals.",
"title": ""
},
{
"docid": "670ad705862fb6d730f6a685d41b9b0b",
"text": "We present a novel optimization strategy for training neural networks which we call “BitNet”. The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over all real values. Our key idea is to limit the expressive power of the network by dynamically controlling the range and set of values that the parameters can take. We formulate this idea using a novel end-to-end approach that circumvents the discrete parameter space by optimizing a relaxed continuous and differentiable upper bound of the typical classification loss function. The approach can be interpreted as a regularization inspired by the Minimum Description Length (MDL) principle. For each layer of the network, our approach optimizes real-valued translation and scaling factors and arbitrary precision integer-valued parameters (weights). We empirically compare BitNet to an equivalent unregularized model on the MNIST and CIFAR-10 datasets. We show that BitNet converges faster to a superior quality solution. Additionally, the resulting model has significant savings in memory due to the use of integer-valued parameters.",
"title": ""
},
{
"docid": "8dec8f3fd456174bb460e24161eb6903",
"text": "Developments in pervasive computing introduced a new world of computing where networked processors embedded and distributed in everyday objects communicating with each other over wireless links. Computers in such environments work in the background while establishing connections among them dynamically and hence will be less visible and intrusive. Such a vision raises questions about how to manage issues like privacy, trust and identity in those environments. In this paper, we review the technical challenges that face pervasive computing environments in relation to each of these issues. We then present a number of security related considerations and use them as a basis for comparison between pervasive and traditional computing. We will argue that these considerations pose particular concerns and challenges to the design and implementation of pervasive environments which are different to those usually found in traditional computing environments. To address these concerns and challenges, further research is needed. We will present a number of directions and topics for possible future research with respect to each of the three issues.",
"title": ""
},
{
"docid": "cc9686bac7de957afe52906763799554",
"text": "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evolution.",
"title": ""
},
{
"docid": "831643d45373109e8cf5602a2c6e1e38",
"text": "A new PV power generation topology, based on the quasi-Z source inverter (QZSI) with battery, is proposed in this paper. With a battery paralleled with one of the capacitors, this system can smooth the grid-injected/load power when the PV panel outputs a variable power with fluctuations. The battery can be charged or discharged without any extra circuit, because of the unique impedance network of QZSI. The operating principle and power flow of this system are analyzed. The closed-loop control strategy is used to achieve Maximum Power Point Tracking (MPPT) and keep the output voltage stable. Three operating modes of the proposed PV power generation system are simulated in the MATLAB/Simulink to verify its principle and theoretical analysis. A prototype is built in the lab, and the experimental results validate the whole system.",
"title": ""
},
{
"docid": "1e511c078892f54f51e93f6f4dbfa31f",
"text": "Over the past decade, the use of mobile phones has increased significantly. However, with every technological development comes some element of health concern, and cell phones are no exception. Recently, various studies have highlighted the negative effects of cell phone exposure on human health, and concerns about possible hazards related to cell phone exposure have been growing. This is a comprehensive, up-to-the-minute overview of the effects of cell phone exposure on human health. The types of cell phones and cell phone technologies currently used in the world are discussed in an attempt to improve the understanding of the technical aspects, including the effect of cell phone exposure on the cardiovascular system, sleep and cognitive function, as well as localized and general adverse effects, genotoxicity potential, neurohormonal secretion and tumour induction. The proposed mechanisms by which cell phones adversely affect various aspects of human health, and male fertility in particular, are explained, and the emerging molecular techniques and approaches for elucidating the effects of mobile phone radiation on cellular physiology using high-throughput screening techniques, such as metabolomics and microarrays, are discussed. A novel study is described, which is looking at changes in semen parameters, oxidative stress markers and sperm DNA damage in semen samples exposed in vitro to cell phone radiation.",
"title": ""
},
{
"docid": "c4029c71409770f4b8f4e43125110552",
"text": "Background: This study compared differences between a control group and a group with unilateral ankle dorsiflexion restriction in the ground reaction force (GRF), angles of the lower limbs joints, and muscular activity during a rebound-jump task in athletes who continue to perform sports activities with unilateral ankle dorsiflexion restriction. Methods: The athletes were divided into the following two groups: The dorsiflexion group included those with a difference of ≥7◦ between bilateral ankle dorsiflexion angles (DF), and the control group included those with a difference of <7◦ between the two ankles (C). An ankle foot orthosis was attached to subjects in group C to apply a restriction on the right-angle dorsiflexion angle. The percentage of maximum voluntary contraction (%MVC) of the legs musculature, components of the GRF, and the hip and knee joint angles during the rebound-jump task were compared between groups DF and C. Results: Group DF showed increased %MVC of the quadriceps muscle, decreased upward component of the GRF, decreased hip flexion, and increased knee eversion angles. Conclusions: This study highlighted that athletes with ankle dorsiflexion restriction had significantly larger knee eversion angles in the rebound-jump task. The reduced hip flexion was likely caused by the restricted ankle dorsiflexion and compensated by the observed increase in quadriceps muscle activation when performing the jump.",
"title": ""
},
{
"docid": "2c2fd7484d137a2ac01bdd4d3f176b44",
"text": "This paper presents a novel two-stage low dropout regulator (LDO) that minimizes output noise via a pre-regulator stage and achieves high power supply rejection via a simple subtractor circuit in the power driver stage. The LDO is fabricated with a standard 0.35mum CMOS process and occupies 0.26 mm2 and 0.39mm2 for single and dual output respectively. Measurement showed PSR is 60dB at 10kHz and integrated noise is 21.2uVrms ranging from 1kHz to 100kHz",
"title": ""
},
{
"docid": "53e668839e9d7e065dc7864830623790",
"text": "Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided.",
"title": ""
},
{
"docid": "e05270c1d2abeda1cee99f1097c1c5d5",
"text": "E-transactions have become promising and very much convenient due to worldwide and usage of the internet. The consumer reviews are increasing rapidly in number on various products. These large numbers of reviews are beneficial to manufacturers and consumers alike. It is a big task for a potential consumer to read all reviews to make a good decision of purchasing. It is beneficial to mine available consumer reviews for popular products from various product review sites of consumer. The first step is performing sentiment analysis to decide the polarity of a review. On the basis of polarity, we can then classify the review. Comparison is made among the different WEKA classifiers in the form of charts and graphs.",
"title": ""
},
{
"docid": "5a9563f3186414cace353bb261792118",
"text": "Solid waste management is one of major aspect which has to be considered in terms of making urban area environment healthier. The common dustbins placed by the municipal corporation are leading no. of health, environmental and social issues. Various causes are there like improper dustbin placement in city, improper system of collecting waste by City Corporation, and more specifically people are not aware enough to use dustbins in proper way. These various major causes are leading serious problems like, an unhygienic condition, air pollution, and unhealthy environment creating health disease. Up till now, research has been carried out by developing a Software Applications for indicating dustbin status, another by Shortest path method for garbage collecting vehicles by integrating RFID, GSM, GIS system; but no any active efforts has been taken paying attention towards managing such waste in atomized way. Considering all these major factors, a smart solid waste management system is designed that will check status and give alert of dustbin fullness and more significantly system has a feature to literate people to use dustbin properly and to automatically sense and clean garbage present outside the dustbin. Thus presented solution achieves smart solid waste management satisfying goal of making Indian cities clean, healthy and hygienic.",
"title": ""
},
{
"docid": "69bfc5edab903692887371464d6eecb0",
"text": "In recent days text summarization had tremendous growth in all languages, especially in India regional languages. Yet the performance of such system needs improvement. This paper proposes an extractive Malayalam summarizer which reduces redundancy in summarized content and meaning of sentences are considered for summary generation. A semantic graph is created for entire document and summary generated by reducing graph using minimal spanning tree algorithm.",
"title": ""
},
{
"docid": "2d86f517026d93454bb1761dd21c7e9d",
"text": "This article presents a new approach to movement planning, on-line trajectory modification, and imitation learning by representing movement plans based on a set of nonlinear differential equations with well-defined attractor dynamics. In contrast to non-autonomous movement representations like splines, the resultant movement plan remains an autonomous set of nonlinear differential equations that forms a control policy (CP) which is robust to strong external perturbations and that can be modified on-line by additional perceptual variables. The attractor landscape of the control policy can be learned rapidly with a locally weighted regression technique with guaranteed convergence of the learning algorithm and convergence to the movement target. This property makes the system suitable for movement imitation and also for classifying demonstrated movement according to the parameters of the learning system. We evaluate the system with a humanoid robot simulation and an actual humanoid robot. Experiments are presented for the imitation of three types of movements: reaching movements with one arm, drawing movements of 2-D patterns, and tennis swings. Our results demonstrate (a) that multi-joint human movements can be encoded successfully by the CPs, (b) that a learned movement policy can readily be reused to produce robust trajectories towards different targets, (c) that a policy fitted for one particular target provides a good predictor of human reaching movements towards neighboring targets, and (d) that the parameter space which encodes a policy is suitable for measuring to which extent two trajectories are qualitatively similar.",
"title": ""
}
] |
scidocsrr
|
945a8e80cba310bf8e820b8827e9601e
|
Supervised Keyphrase Extraction as Positive Unlabeled Learning
|
[
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "34208fafbb3009a1bb463e3d8d983e61",
"text": "A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with \"relevant\" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems.",
"title": ""
},
{
"docid": "fb2028ca0e836452862a2cb1fa707d28",
"text": "State-of-the-art approaches for unsupervised keyphrase extraction are typically evaluated on a single dataset with a single parameter setting. Consequently, it is unclear how effective these approaches are on a new dataset from a different domain, and how sensitive they are to changes in parameter settings. To gain a better understanding of state-of-the-art unsupervised keyphrase extraction algorithms, we conduct a systematic evaluation and analysis of these algorithms on a variety of standard evaluation datasets.",
"title": ""
}
] |
[
{
"docid": "70743017cdee81c042491fe9ea550515",
"text": "Lightweight cryptographic solutions are required to guarantee the security of Internet of Things (IoT) pervasiveness. Cryptographic primitives mandate a non-linear operation. The design of a lightweight, secure, non-linear 4 × 4 substitution box (S-box) suited to Internet of Things (IoT) applications is proposed in this work. The structure of the 4 × 4 S-box is devised in the finite fields GF (24) and GF ((22)2). The finite field S-box is realized by multiplicative inversion followed by an affine transformation. The multiplicative inverse architecture employs Euclidean algorithm for inversion in the composite field GF ((22)2). The affine transformation is carried out in the field GF (24). The isomorphic mapping between the fields GF (24) and GF ((22)2) is based on the primitive element in the higher order field GF (24). The recommended finite field S-box architecture is combinational and enables sub-pipelining. The linear and differential cryptanalysis validates that the proposed S-box is within the maximal security bound. It is observed that there is 86.5% lesser gate count for the realization of sub field operations in the composite field GF ((22)2) compared to the GF (24) field. In the PRESENT lightweight cipher structure with the basic loop architecture, the proposed S-box demonstrates 5% reduction in the gate equivalent area over the look-up-table-based S-box with TSMC 180 nm technology.",
"title": ""
},
{
"docid": "86f0fa880f2a72cd3bf189132cc2aa44",
"text": "The advent of new technical solutions has offered a vast scope to encounter the existing challenges in tablet coating technology. One such outcome is the usage of innovative aqueous coating compositions to meet the limitations of organic based coating. The present study aimed at development of delayed release pantoprazole sodium tablets by coating with aqueous acrylic system belonging to methacrylic acid copolymer and to investigate the ability of the dosage form to protect the drug from acid milieu and to release rapidly in the duodenal pH. The core tablets were produced by direct compression using different disintegrants in variable concentrations. The physicochemical properties of all the tablets were consistent and satisfactory. Crosspovidone at 7.5% proved to be a better disintegrant with rapid disintegration with a minute, owing to its wicking properties. The optimized formulations were seal coated using HPMC dispersion to act as a barrier between the acid liable drug and enteric film coatings. The subcoating process was followed by enteric coating of tablets by the application of acryl-Eze at different theoretical weight gains. Enteric coated formulations were subjected to disintegration and dissolution tests by placing them in 0.1 N HCl for 2 h and then in pH 6.8 phosphate buffer for 1 h. The coated tablets remained static without peeling or cracking in the acid media, however instantly disintegrated in the intestinal pH. In the in vitro release studies, the optimized tablets released 0.16% in the acid media and 96% in the basic media which are well within the selected criteria. Results of the stability tests were satisfactory with the dissolution rate and assays were within acceptable limits. The results ascertained the acceptability of the aqueous based enteric coating composition for the successful development of delayed release, duodenal specific dosage forms for proton pump inhibitors.",
"title": ""
},
{
"docid": "bdf22b73549c774c4c42c48998da00f8",
"text": "One of the key issues in practical speech processing is to achieve robust voice activity detection (VAD) against the background noise. Most of the statistical model-based approaches have tried to employ the Gaussian assumption in the discrete Fourier transform (DFT) domain, which, however, deviates from the real observation. In this paper, we propose a class of VAD algorithms based on several statistical models. In addition to the Gaussian model, we also incorporate the complex Laplacian and Gamma probability density functions to our analysis of statistical properties. With a goodness-of-fit tests, we analyze the statistical properties of the DFT spectra of the noisy speech under various noise conditions. Based on the statistical analysis, the likelihood ratio test under the given statistical models is established for the purpose of VAD. Since the statistical characteristics of the speech signal are differently affected by the noise types and levels, to cope with the time-varying environments, our approach is aimed at finding adaptively an appropriate statistical model in an online fashion. The performance of the proposed VAD approaches in both the stationary and nonstationary noise environments is evaluated with the aid of an objective measure.",
"title": ""
},
{
"docid": "ae83a2258907f00500792178dc65340d",
"text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes: namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positive is 7.3/scan and the location of all detected nodules are recognized correctly.",
"title": ""
},
{
"docid": "da5c1fea3f5d360bc3e30c0056e8f9b0",
"text": "The advances of technologies for mobile robotics enable the application of robots to increasingly complex tasks. Cleaning office buildings on a daily basis is a problem that could be partially automatized with a cleaning robot that assists the cleaning professional yielding a higher cleaning capacity. A typical task in this domain is the selective cleaning, that is a focused cleaning effort to dirty spots, which speeds up the overall cleaning procedure significantly. To enable a robotic cleaner to accomplish this task, it is first necessary to distinguish dirty areas from the clean remainder. This paper discusses a vision-based dirt detection system for mobile cleaning robots that can be applied to any surface and dirt without previous training, that is fast enough to be executed on a mobile robot and which achieves high dirt recognition rates of 90% at an acceptable false positive rate of 45%. The paper also introduces a large database of real scenes which was used for the evaluation and is publicly available.",
"title": ""
},
{
"docid": "8cc42ad71caac7605648166f9049df8e",
"text": "This section considers the application of eye movements to user interfaces—both for analyzing interfaces, measuring usability, and gaining insight into human performance—and as an actual control medium within a human-computer dialogue. The two areas have generally been reported separately; but this book seeks to tie them together. For usability analysis, the user’s eye movements while using the system are recorded and later analyzed retrospectively, but the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user-computer dialogue. They might be the sole input, typically for disabled users or hands-busy applications, or they might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices.",
"title": ""
},
{
"docid": "34c6a8fc3fed159b3eaa5e01158d1060",
"text": "Web-based malware attacks have become one of the most serious threats that need to be addressed urgently. Several approaches that have attracted attention as promising ways of detecting such malware include employing one of several blacklists. However, these conventional approaches often fail to detect new attacks owing to the versatility of malicious websites. Thus, it is difficult to maintain up-to-date blacklists with information for new malicious websites. To tackle this problem, this paper proposes a new scheme for detecting malicious websites using the characteristics of IP addresses. Our approach leverages the empirical observation that IP addresses are more stable than other metrics such as URLs and DNS records. While the strings that form URLs or DNS records are highly variable, IP addresses are less variable, i.e., IPv4 address space is mapped onto 4-byte strings. In this paper, a lightweight and scalable detection scheme that is based on machine learning techniques is developed and evaluated. The aim of this study is not to provide a single solution that effectively detects web-based malware but to develop a technique that compensates the drawbacks of existing approaches. The effectiveness of our approach is validated by using real IP address data from existing blacklists and real traffic data on a campus network. The results demonstrate that our scheme can expand the coverage/accuracy of existing blacklists and also detect unknown malicious websites that are not covered by conventional approaches.",
"title": ""
},
{
"docid": "7543281174d7dc63e180249d94ad6c07",
"text": "Enriching speech recognition output with sentence boundaries improves its human readability and enables further processing by downstream language processing modules. We have constructed a hidden Markov model (HMM) system to detect sentence boundaries that uses both prosodic and textual information. Since there are more nonsentence boundaries than sentence boundaries in the data, the prosody model, which is implemented as a decision tree classifier, must be constructed to effectively learn from the imbalanced data distribution. To address this problem, we investigate a variety of sampling approaches and a bagging scheme. A pilot study was carried out to select methods to apply to the full NIST sentence boundary evaluation task across two corpora (conversational telephone speech and broadcast news speech), using both human transcriptions and recognition output. In the pilot study, when classification error rate is the performance measure, using the original training set achieves the best performance among the sampling methods, and an ensemble of multiple classifiers from different downsampled training sets achieves slightly poorer performance, but has the potential to reduce computational effort. However, when performance is measured using receiver operating characteristics (ROC) or area under the curve (AUC), then the sampling approaches outperform the original training set. This observation is important if the 0885-2308/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.csl.2005.06.002 * Corresponding author. Tel.: +1 510 666 2993; fax: +510 666 2956. E-mail addresses: [email protected] (Y. Liu), [email protected] (N.V. Chawla), [email protected] (M.P. Harper), [email protected] (E. Shriberg), [email protected] (A. Stolcke). Y. Liu et al. / Computer Speech and Language 20 (2006) 468–494 469 sentence boundary detection output is used by downstream language processing modules. Bagging was found to significantly improve system performance for each of the sampling methods. The gain from these methods may be diminished when the prosody model is combined with the language model, which is a strong knowledge source for the sentence detection task. The patterns found in the pilot study were replicated in the full NIST evaluation task. The conclusions may be dependent on the task, the classifiers, and the knowledge combination approach. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fce754c728d17319bae7ebe8f532dfe1",
"text": "As previous OS abstractions and structures fail to explicitly consider the separation between resource users an d providers, the shift toward server-side computing poses se rious challenges to OS structures, which is aggravated by the increasing many-core scale and workload diversity. This paper presents the horizontal OS model. We propose a new OS abstraction—subOS—an independent OS instance owning physical resources that can be created, destroyed, a nd resized swiftly. We horizontally decompose the OS into the s upervisor for the resource provider and several subOSes for r esource users. The supervisor discovers, monitors, and prov isions resources for subOSes, while each subOS independentl y runs applications. We confine state sharing among subOSes, but allow on-demand state sharing if necessary. We present the first implementation—RainForest, which supports unmodified Linux applications binaries. Our comprehensive evaluations using six benchmark suites quantit atively show RainForest outperforms Linux with three differ ent kernels, LXC, and XEN. The RainForest source code is soon available.",
"title": ""
},
{
"docid": "8930fa7afc57acd9a6e664ad1801e81a",
"text": "How to construct models for speech/nonspeech discrimination is a crucial point for voice activity detectors (VADs). Semi-supervised learning is the most popular way for model construction in conventional VADs. In this correspondence, we propose an unsupervised learning framework to construct statistical models for VAD. This framework is realized by a sequential Gaussian mixture model. It comprises an initialization process and an updating process. At each subband, the GMM is firstly initialized using EM algorithm, and then sequentially updated frame by frame. From the GMM, a self-regulatory threshold for discrimination is derived at each subband. Some constraints are introduced to this GMM for the sake of reliability. For the reason of unsupervised learning, the proposed VAD does not rely on an assumption that the first several frames of an utterance are nonspeech, which is widely used in most VADs. Moreover, the speech presence probability in the time-frequency domain is a byproduct of this VAD. We tested it on speech from TIMIT database and noise from NOISEX-92 database. The evaluations effectively showed its promising performance in comparison with VADs such as ITU G.729B, GSM AMR, and a typical semi-supervised VAD.",
"title": ""
},
{
"docid": "4be8fe80e78e233fe540d543b0d81bb6",
"text": "Infective endocarditis is defined by a focus of infection within the heart and is a feared disease across the field of cardiology. It is frequently acquired in the health care setting, and more than one-half of cases now occur in patients without known heart disease. Despite optimal care, mortality approaches 30% at 1 year. The challenges posed by infective endocarditis are significant. It is heterogeneous in etiology, clinical manifestations, and course. Staphylococcus aureus, which has become the predominant causative organism in the developed world, leads to an aggressive form of the disease, often in vulnerable or elderly patient populations. There is a lack of research infrastructure and funding, with few randomized controlled trials to guide practice. Longstanding controversies such as the timing of surgery or the role of antibiotic prophylaxis have not been resolved. The present article reviews the challenges posed by infective endocarditis and outlines current and future strategies to limit its impact.",
"title": ""
},
{
"docid": "e98e902e22d9b8acb6e9e9dcd241471c",
"text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.",
"title": ""
},
{
"docid": "42b9ba3cf10ff879799ae0a4272e68fa",
"text": "This article argues that ( a ) ego, or self, is an organization of knowledge, ( b ) ego is characterized by cognitive biases strikingly analogous to totalitarian information-control strategies, and ( c ) these totalitarian-ego biases junction to preserve organization in cognitive structures. Ego's cognitive biases are egocentricity (self as the focus of knowledge), \"beneffectance\" (perception of responsibility for desired, but not undesired, outcomes), and cognitive conservatism (resistance to cognitive change). In addition to being pervasively evident in recent studies of normal human cognition, these three biases are found in actively functioning, higher level organizations of knowledge, perhaps best exemplified by theoretical paradigms in science. The thesis that egocentricity, beneffectance, and conservatism act to preserve knowledge organizations leads to the proposal of an intrapsychic analog of genetic evolution, which in turn provides an alternative to prevalent motivational and informational interpretations of cognitive biases. The ego rejects the unbearable idea together with its associated affect and behaves as if the idea had never occurred to the person a t all. (Freud, 1894/1959, p. 72) Alike with the individual and the group, the past is being continually re-made, reconstructed in the interests of the present. (Bartlett, 1932, p. 309) As historians of our own lives we seem to be, on the one hand, very inattentive and, on the other, revisionists who will justify the present by changing the past. (Wixon & Laird, 1976, p. 384) \"Who controls the past,\" ran the Party slogan, \"controls the future: who controls the present controls the past.\" (Orwell, 1949, p. 32) totalitarian, was chosen only with substantial reservation because of this label's pejorative connotations. Interestingly, characteristics that seem undesirable in a political system can nonetheless serve adaptively in a personal organization of knowledge. The conception of ego as an organization of knowledge synthesizes influences from three sources --empirical, literary, and theoretical. First, recent empirical demonstrations of self-relevant cognitive biases suggest that the biases play a role in some fundamental aspect of personality. Second, George Orwell's 1984 suggests the analogy between ego's biases and totalitarian information con&ol. Last, the theories of Loevinger (1976) and Epstein ( 1973 ) suggest the additional analogy between ego's organization and theoretical organizations of scientific knowledge. The first part of this article surveys evidence indicating that ego's cognitive biases are pervasive in and characteristic of normal personalities. The second part sets forth arguments for interpreting the biases as manifestations of an effectively functioning organization of knowledge. The last section develops an explanation for the totalitarian-ego biases by analyzing their role in maintaining cognitive organization and in supporting effective behavior. I . Three Cognitive Biases: Fabrication and Revision of Personal History Ego, as an organization of knowledge (a. conclusion to be developed later), serves the functions of What follows is a portrait of self (or ego-the terms observing (perceiving) and recording (rememberare used interchangeably) constructed by intering) personal experience; it can be characterized, weaving strands drawn from several areas of recent therefore, as a perssnal historian. Many findings research. 
The most striking features of the portrait are three cognitive biases, which correspond disturbingly to thought control and propaganda devices Acknowledgments are given at the end of the article. Requests for reprints should be sent to Anthony G. that are to be defining characteristics of Greenwald, Department of Psychology, Ohio State Univera totalitarian political system. The epithet for ego, sity, 404C West 17th Avenue, Columbus, Ohio 43210. Copyright 1980 by the American Psychological Association, Inc. 0003466X/80/3S07-0603$00.75 from recent research in personality, cognitive, and social psychology demonstrate that ego fabricates and revises history, thereby engaging in practices not ordinarily admired in historians. These lapses in personal scholarship, or cognitive biases, are discussed below in three categories: egocentricity (self perceived as more central to events than it is), \"beneffectance\" l (self perceived as selectively responsible for desired, but not undesired, outcomes), and conservatism (resistance to cognitive",
"title": ""
},
{
"docid": "5759152f6e9a9cb1e6c72857e5b3ec54",
"text": "Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.",
"title": ""
},
{
"docid": "6c8445b5fec9022a968d3551efb8972b",
"text": "Face Recognition by a robot or machine is one of the challenging research topics in the recent years. It has become an active research area which crosscuts several disciplines such as image processing, pattern recognition, computer vision, neural networks and robotics. For many applications, the performances of face recognition systems in controlled environments have achieved a satisfactory level. However, there are still some challenging issues to address in face recognition under uncontrolled conditions. The variation in illumination is one of the main challenging problems that a practical face recognition system needs to deal with. It has been proven that in face recognition, differences caused by illumination variations are more significant than differences between individuals (Adini et al., 1997). Various methods have been proposed to solve the problem. These methods can be classified into three categories, named face and illumination modeling, illumination invariant feature extraction and preprocessing and normalization. In this chapter, an extensive and state-of-the-art study of existing approaches to handle illumination variations is presented. Several latest and representative approaches of each category are presented in detail, as well as the comparisons between them. Moreover, to deal with complex environment where illumination variations are coupled with other problems such as pose and expression variations, a good feature representation of human face should not only be illumination invariant, but also robust enough against pose and expression variations. Local binary pattern (LBP) is such a local texture descriptor. In this chapter, a detailed study of the LBP and its several important extensions is carried out, as well as its various combinations with other techniques to handle illumination invariant face recognition under a complex environment. By generalizing different strategies in handling illumination variations and evaluating their performances, several promising directions for future research have been suggested. This chapter is organized as follows. Several famous methods of face and illumination modeling are introduced in Section 2. In Section 3, latest and representative approaches of illumination invariant feature extraction are presented in detail. More attentions are paid on quotient-image-based methods. In Section 4, the normalization methods on discarding low frequency coefficients in various transformed domains are introduced with details. In Section 5, a detailed introduction of the LBP and its several important extensions is presented, as well as its various combinations with other face recognition techniques. In Section 6, comparisons between different methods and discussion of their advantages and disadvantages are presented. Finally, several promising directions as the conclusions are drawn in Section 7.",
"title": ""
},
{
"docid": "25c8d484406e9eb05708e51154bfbf2c",
"text": "Most publications of Context-Aware Recommender Systems (CARS) do not follow a Software Engineering process while building their systems, rather they focus on proposing and testing algorithms. We believe that CARS can be built following a structured development process, therefore this paper reviews Context-Aware Recommender System literature, identify the performed activities, and map such activities to the ISO/IEC 29110 Software Implementation activities. The results shows that even when no publication follows all the activities of the 29110 standard, all together, the publications cover most of the proposed activities of the ISO/IEC 29110 standard.",
"title": ""
},
{
"docid": "1e7c15d90591334adae76d385a875d4c",
"text": "Neural networks have become an increasingly popular solution for network intrusion detection systems (NIDS). Their capability of learning complex patterns and behaviors make them a suitable solution for differentiating between normal traffic and network attacks. However, a drawback of neural networks is the amount of resources needed to train them. Many network gateways and routers devices, which could potentially host an NIDS, simply do not have the memory or processing power to train and sometimes even execute such models. More importantly, the existing neural network solutions are trained in a supervised manner. Meaning that an expert must label the network traffic and update the model manually from time to time. In this paper, we present Kitsune: a plug and play NIDS which can learn to detect attacks on the local network, without supervision, and in an efficient online manner. Kitsune’s core algorithm (KitNET) uses an ensemble of neural networks called autoencoders to collectively differentiate between normal and abnormal traffic patterns. KitNET is supported by a feature extraction framework which efficiently tracks the patterns of every network channel. Our evaluations show that Kitsune can detect various attacks with a performance comparable to offline anomaly detectors, even on a Raspberry PI. This demonstrates that Kitsune can be a practical and economic NIDS. Keywords—Anomaly detection, network intrusion detection, online algorithms, autoencoders, ensemble learning.",
"title": ""
},
{
"docid": "54bf44e04920bdaa7388dbbbbd34a1a8",
"text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.",
"title": ""
},
{
"docid": "79ad9125b851b6d2c3ed6fb1c5cf48e1",
"text": "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29% on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.",
"title": ""
},
{
"docid": "e0f56e20d509234a45b0a91f8d6b91cb",
"text": "This paper describes recent research findings on resource sharing between trees and crops in the semiarid tropics and attempts to reconcile this information with current knowledge of the interactions between savannah trees and understorey vegetation by examining agroforestry systems from the perspective of succession. In general, productivity of natural vegetation under savannah trees increases as rainfall decreases, while the opposite occurs in agroforestry. One explanation is that in the savannah, the beneficial effects of microclimatic improvements (e.g. lower temperatures and evaporation losses) are greater in more xeric environments. Mature savannah trees have a high proportion of woody above-ground structure compared to foliage, so that the amount of water 'saved' (largely by reduction in soil evaporation) is greater than water 'lost' through transpiration by trees. By contrast, in agroforestry practices such as alley cropping where tree density is high, any beneficial effects of the trees on microclimate are negated by reductions in soil moisture due to increasing interception losses and tree transpiration. While investment in woody structure can improve the water economy beneath agroforestry trees, it inevitably reduces the growth rate of the trees and thus increases the time required for improved understorey productivity. Therefore, agroforesters prefer trees with more direct and immediate benefits to farmers. The greatest opportunity for simultaneous agroforestry practices is therefore to fill niches within the landscape where resources are currently under-utilised by crops. In this way, agroforestry can mimic the large scale patch dynamics and successional progression of a natural ecosystem.",
"title": ""
}
] |
scidocsrr
|
1b3d86cd2e8c0891295b9aa7d28911b2
|
SkinMarks: Enabling Interactions on Body Landmarks Using Conformal Skin Electronics
|
[
{
"docid": "89e8df51a72309dc99789f90e922d1c5",
"text": "Information is traditionally confined to paper or digitally to a screen. In this paper, we introduce WUW, a wearable gestural interface, which attempts to bring information out into the tangible world. By using a tiny projector and a camera mounted on a hat or coupled in a pendant like wearable device, WUW sees what the user sees and visually augments surfaces or physical objects the user is interacting with. WUW projects information onto surfaces, walls, and physical objects around us, and lets the user interact with the projected information through natural hand gestures, arm movements or interaction with the object itself.",
"title": ""
}
] |
[
{
"docid": "724b049bd1ba662ebc29cc9eddad4a82",
"text": "The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art.",
"title": ""
},
{
"docid": "ffdc919d6b9fb776c5e2db0b4a8cb3e6",
"text": "In suicidal asphyxia smothering is very rare, especially when caused by winding strips of adhesive tape around the head to cover the nose and mouth. The authors report a very unusual case in which the deceased, a 66-year-old man, was found with two strips of tape wound around his head: the first, more superficial tape was wrapped six times and the second was wrapped nine times. Only integration of the crime scene data with those of the autopsy and the patient's psychological profile enabled identification of the event as suicide.",
"title": ""
},
{
"docid": "1d2f72587e694aa8d6435e176e87d4cb",
"text": "It is well known that the performance of context-based image processing systems can be improved by allowing the processor (e.g., an encoder or a denoiser) a delay of several samples before making a processing decision. Often, however, for such systems, traditional delayed-decision algorithms can become computationally prohibitive due to the growth in the size of the space of possible solutions. In this paper, we propose a reduced-complexity, one-pass, delayed-decision algorithm that systematically reduces the size of the search space, while also preserving its structure. In particular, we apply the proposed algorithm to two examples of adaptive context-based image processing systems, an image coding system that employs a context-based entropy coder, and a spatially adaptive image-denoising system. For these two types of widely used systems, we show that the proposed delayed-decision search algorithm outperforms instantaneous-decision algorithms with only a small increase in complexity. We also show that the performance of the proposed algorithm is better than that of other, higher complexity, delayed-decision algorithms.",
"title": ""
},
{
"docid": "a8fabda3947c639ea0db7aa806dc71f1",
"text": "The field of Artificial Intelligence, which started roughly half a century ago, has a turbulent history. In the 1980s there has been a major paradigm shift towards embodiment. While embodied artificial intelligence is still highly diverse, changing, and far from “theoretically stable”, a certain consensus about the important issues and methods has been achieved or is rapidly emerging. In this non-technical paper we briefly characterize the field, summarize its achievements, and identify important issues for future research. One of the fundamental unresolved problems has been and still is how thinking emerges from an embodied system. Provocatively speaking, the central issue could be captured by the question “How does walking relate to thinking?”",
"title": ""
},
{
"docid": "8ae21da19b8afabb941bc5bb450434a9",
"text": "A 7-month-old child presented with imperforate anus, penoscrotal hypospadias and transposition, and a midline mucosa-lined perineal mass. At surgery the mass was found to be supplied by the median sacral artery. It was excised and the anorectal malformation was repaired by posterior sagittal anorectoplasty. Histologically the mass revealed well-differentiated colonic tissue. The final diagnosis was well-differentiated sacrococcygeal teratoma in association with anorectal malformation.",
"title": ""
},
{
"docid": "accb879062cf9c2e6fa3fb636f33b333",
"text": "The CLEF eRisk 2018 challenge focuses on early detection of signs of depression or anorexia using posts or comments over social media. The eRisk lab has organized two tasks this year and released two different corpora for the individual tasks. The corpora are developed using the posts and comments over Reddit, a popular social media. The machine learning group at Ramakrishna Mission Vivekananda Educational and Research Institute (RKMVERI), India has participated in this challenge and individually submitted five results to accomplish the objectives of these two tasks. The paper presents different machine learning techniques and analyze their performance for early risk prediction of anorexia or depression. The techniques involve various classifiers and feature engineering schemes. The simple bag of words model has been used to perform ada boost, random forest, logistic regression and support vector machine classifiers to identify documents related to anorexia or depression in the individual corpora. We have also extracted the terms related to anorexia or depression using metamap, a tool to extract biomedical concepts. Theerefore, the classifiers have been implemented using bag of words features and metamap features individually and subsequently combining these features. The performance of the recurrent neural network is also reported using GloVe and Fasttext word embeddings. Glove and Fasttext are pre-trained word vectors developed using specific corpora e.g., Wikipedia. The experimental analysis on the training set shows that the ada boost classifier using bag of words model outperforms the other methods for task1 and it achieves best score on the test set in terms of precision over all the runs in the challenge. Support vector machine classifier using bag of words model outperforms the other methods in terms of fmeasure for task2. The results on the test set submitted to the challenge suggest that these framework achieve reasonably good performance.",
"title": ""
},
{
"docid": "d68f1d3762de6db8bf8d67556d4c72ec",
"text": "With the emerging technologies and all associated devices, it is predicted that massive amount of data will be created in the next few years – in fact, as much as 90% of current data were created in the last couple of years – a trend that will continue for the foreseeable future. Sustainable computing studies the process by which computer engineer/scientist designs computers and associated subsystems efficiently and effectively with minimal impact on the environment. However, current intelligent machine-learning systems are performance driven – the focus is on the predictive/classification accuracy, based on known properties learned from the training samples. For instance, most machine-learning-based nonparametric models are known to require high computational cost in order to find the global optima. With the learning task in a large dataset, the number of hidden nodes within the network will therefore increase significantly, which eventually leads to an exponential rise in computational complexity. This paper thus reviews the theoretical and experimental data-modeling literature, in large-scale data-intensive fields, relating to: (1) model efficiency, including computational requirements in learning, and data-intensive areas’ structure and design, and introduces (2) new algorithmic approaches with the least memory requirements and processing to minimize computational cost, while maintaining/improving its predictive/classification accuracy and stability.",
"title": ""
},
{
"docid": "52d6711ebbafd94ab5404e637db80650",
"text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"title": ""
},
{
"docid": "d0f29010a0ce8f4eac37fb18e79b9667",
"text": "This paper presents a new concept of device fingerprinting (or profiling) to enhance wireless security using Infinite Hidden Markov Random Field (iHMRF). Wireless device fingerprinting is an emerging approach for detecting spoofing attacks in wireless network. Existing methods utilize either time-independent features or time-dependent features, but not both concurrently due to the complexity of different dynamic patterns. In this paper, we present a unified approach to fingerprinting based on iHMRF. The proposed approach is able to model both time-independent and time-dependent features, and to automatically detect the number of devices that is dynamically varying. We propose the first iHMRF-based online classification algorithm for wireless environment using variational incremental inference, micro-clustering techniques, and batch updates. Extensive simulation evaluations demonstrate the effectiveness and efficiency of this new approach.",
"title": ""
},
{
"docid": "be8e1e4fd9b8ddb0fc7e1364455999e8",
"text": "In this paper, we describe the development and exploitation of a corpus-based tool for the identification of metaphorical patterns in large datasets. The analysis of metaphor as a cognitive and cultural, rather than solely linguistic, phenomenon has become central as metaphor researchers working within ‘Cognitive Metaphor Theory’ have drawn attention to the presence of systematic and pervasive conventional metaphorical patterns in ‘ordinary’ language (e.g. I’m at a crossroads in my life). Cognitive Metaphor Theory suggests that these linguistic patterns reflect the existence of conventional conceptual metaphors, namely systematic cross-domain correspondences in conceptual structure (e.g. LIFE IS A JOURNEY). This theoretical approach, described further in section 2, has led to considerable advances in our understanding of metaphor both as a linguistic device and a cognitive model, and to our awareness of its role in many different genres and discourses. Although some recent research has incorporated corpus linguistic techniques into this framework for the analysis of metaphor, to date, such analyses have primarily involved the concordancing of pre-selected search strings (e.g. Deignan 2005). The method described in this paper represents an attempt to extend the limits of this form of analysis. In our approach, we have applied an existing semantic field annotation tool (USAS) developed at Lancaster University to aid metaphor researchers in searching corpora. We are able to filter all possible candidate semantic fields proposed by USAS to assist in finding possible ‘source’ (e.g. JOURNEY) and ‘target’ (e.g. LIFE) domains, and we can then go on to consider the potential metaphoricity of the expressions included under each possible source domain. This method thus enables us to identify open-ended sets of metaphorical expressions, which are not limited to predetermined search strings. In section 3, we present this emerging methodology for the computer-assisted analysis of metaphorical patterns in discourse. The semantic fields automatically annotated by USAS can be seen as roughly corresponding to the domains of metaphor theory. We have used USAS in combination with key word and domain techniques in Wmatrix (Rayson, 2003) to replicate earlier manual analyses, e.g. machine metaphors in Ken Kesey’s One Flew Over the Cuckoo’s Nest (Semino and Swindlehurst, 1996) and war, machine and organism metaphors in business magazines (Koller, 2004a). These studies are described in section 4.",
"title": ""
},
{
"docid": "83991055d207c47bc2d5af0d83bfcf9c",
"text": "BACKGROUND\nThe present study aimed at investigating the role of depression and attachment styles in predicting cell phone addiction.\n\n\nMETHODS\nIn this descriptive correlational study, a sample including 100 students of Payame Noor University (PNU), Reyneh Center, Iran, in the academic year of 2013-2014 was selected using volunteer sampling. Participants were asked to complete the adult attachment inventory (AAI), Beck depression inventory-13 (BDI-13) and the cell phone overuse scale (COS).\n\n\nFINDINGS\nResults of the stepwise multiple regression analysis showed that depression and avoidant attachment style were the best predictors of students' cell phone addiction (R(2) = 0.23).\n\n\nCONCLUSION\nThe results of this study highlighted the predictive value of depression and avoidant attachment style concerning students' cell phone addiction.",
"title": ""
},
{
"docid": "cbbf9146d190a8dd963b453eb7fbf317",
"text": "Two hundred eighty-eight cases of culture-proven bacterial conjunctivities were evaluated as part of two multicentered, randomized, prospective clinical studies comparing the antibacterial efficacy of topically administered ciprofloxacin 0.3% either with a placebo or with tobramycin 0.3%. In the first study, ciprofloxacin was significantly (P less than .001) more effective than the placebo. It eradicated or reduced the various bacterial pathogens in 93.6% of patients, compared to 59.5% for the placebo. In the second study, ciprofloxacin (94.5%) and tobramycin (91.9%) were equally effective. Topically applied ciprofloxacin eradicated or reduced all isolated bacterial species, attesting to its broad antibacterial spectrum and its potential usefulness in treating external ocular infections.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
},
{
"docid": "929640bc4813841f1a220e31da3bd631",
"text": "In this paper, a U-slot rectangular microstrip patch antenna is designed in order to overcome the narrowband characteristic and gain broader band. The antenna has dual-band characteristics, so it has wider operating bandwidth. The antenna works at Ku-band and the center frequency is 16GHz. The characteristics are analyzed and optimized with Ansoft HFSS, the simulation results show that the absolute bandwidth and gain of the antenna unit are 2.7GHz and 8.1dB, and of the antenna array are 3.1GHz and 14.4dB. The relative bandwidth reaches to 19.4%, which is much wider than the general bandwidth of about 1% to 7%.",
"title": ""
},
{
"docid": "f5c4bdf959e455193221a1fa76e1895a",
"text": "This book contains a wide variety of hot topics on advanced computational intelligence methods which incorporate the concept of complex and hypercomplex number systems into the framework of artificial neural networks. In most chapters, the theoretical descriptions of the methodology and its applications to engineering problems are excellently balanced. This book suggests that a better information processing method could be brought about by selecting a more appropriate information representation scheme for specific problems, not only in artificial neural networks but also in other computational intelligence frameworks. The advantages of CVNNs and hypercomplex-valued neural networks over real-valued neural networks are confirmed in some case studies but still unclear in general. Hence, there is a need to further explore the difference between them from the viewpoint of nonlinear dynamical systems. Nevertheless, it seems that the applications of CVNNs and hypercomplex-valued neural networks are very promising.",
"title": ""
},
{
"docid": "666d42f889fd6db5d235312929111dae",
"text": "This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attention mechanism of the gaze-control system is based on features that have been proven to guide human attention: nonverbal and verbal cues, proxemics, the visual field of view, and the habituation effect. Our gaze-control system uses Kinect skeleton tracking together with speech recognition and SHORE-based facial expression recognition to implement the same features. As part of a pilot evaluation, we collected the gaze behavior of 11 participants in an eye-tracking study. We showed participants videos of two-person interactions and tracked their gaze behavior. A comparison of the human gaze behavior with the behavior of our gaze-control system running on the same videos shows that it replicated human gaze behavior 89% of the time.",
"title": ""
},
{
"docid": "18233af1857390bff51d2e713bc766d9",
"text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.",
"title": ""
},
{
"docid": "8cbdd4f368ca9fd7dcf7e4f8c9748412",
"text": "We describe an efficient neural network method to automatically learn sentiment lexicons without relying on any manual resources. The method takes inspiration from the NRC method, which gives the best results in SemEval13 by leveraging emoticons in large tweets, using the PMI between words and tweet sentiments to define the sentiment attributes of words. We show that better lexicons can be learned by using them to predict the tweet sentiment labels. By using a very simple neural network, our method is fast and can take advantage of the same data volume as the NRC method. Experiments show that our lexicons give significantly better accuracies on multiple languages compared to the current best methods.",
"title": ""
},
{
"docid": "7e354ca56591a9116d651b53c6ab744d",
"text": "We have implemented a concurrent copying garbage collector that uses replicating garbage collection. In our design, the client can continuously access the heap during garbage collection. No low-level synchronization between the client and the garbage collector is required on individual object operations. The garbage collector replicates live heap objects and periodically synchronizes with the client to obtain the client's current root set and mutation log. An experimental implementation using the Standard ML of New Jersey system on a shared-memory multiprocessor demonstrates excellent pause time performance and moderate execution time speedups.",
"title": ""
},
{
"docid": "fed4a1b88d839b50ed91715a5f22b813",
"text": "Ensemble methodology, which builds a classification model by integrating multiple classifiers, can be used for improving prediction performance. Researchers from various disciplines such as statistics, pattern recognition, and machine learning have seriously explored the use of ensemble methodology. This paper presents an updated survey of ensemble methods in classification tasks, while introducing a new taxonomy for characterizing them. The new taxonomy, presented from the algorithm designer’s point of view, is based on five dimensions: inducer, combiner, diversity, size, and members dependency. We also propose several selection criteria, presented from the practitioner’s point of view, for choosing the most suitable ensemble method.",
"title": ""
}
] |
scidocsrr
|
39454ec8fb5ccc67fe6e08290f9f114e
|
Multi-armed Bandit Algorithms and Empirical Evaluation
|
[
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
}
] |
[
{
"docid": "89b8317509d27a6f13d8ba38f52f4816",
"text": "The merging of optimization and simulation technologies has seen a rapid growth in recent years. A Google search on \"Simulation Optimization\" returns more than six thousand pages where this phrase appears. The content of these pages ranges from articles, conference presentations and books to software, sponsored work and consultancy. This is an area that has sparked as much interest in the academic world as in practical settings. In this paper, we first summarize some of the most relevant approaches that have been developed for the purpose of optimizing simulated systems. We then concentrate on the metaheuristic black-box approach that leads the field of practical applications and provide some relevant details of how this approach has been implemented and used in commercial software. Finally, we present an example of simulation optimization in the context of a simulation model developed to predict performance and measure risk in a real world project selection problem.",
"title": ""
},
{
"docid": "19dea4fca2a60fad4b360d34b15480ae",
"text": "We present Neural Autoregressive Distribution Estimation (NADE) models, which are neural network architectures applied to the problem of unsupervised distribution and density estimation. They leverage the probability product rule and a weight sharing scheme inspired from restricted Boltzmann machines, to yield an estimator that is both tractable and has good generalization performance. We discuss how they achieve competitive performance in modeling both binary and real-valued observations. We also present how deep NADE models can be trained to be agnostic to the ordering of input dimensions used by the autoregressive product rule decomposition. Finally, we also show how to exploit the topological structure of pixels in images using a deep convolutional architecture for NADE.",
"title": ""
},
{
"docid": "71570a28c887227b3421b1f91ba61f4c",
"text": "Anomaly based network intrusion detection (ANID) is an important problem that has been researched within diverse research areas and various application domains. Several anomaly based network intrusion detection systems (ANIDS) can be found in the literature. Most ANIDSs employ supervised algorithms, whose performances highly depend on attack-free training data. However, this kind of training data is difficult to obtain in real world network environment. Moreover, with changing network environment or services, patterns of normal traffic will be changed. This leads to high false positive rate of supervised ANIDSs. Using unsupervised anomaly detection techniques, however, the system can be trained with unlabeled data and is capable of detecting previously unseen attacks. We have categorized the existing ANIDSs based on its type, class, nature of detection/ processing, level of security, etc. We also enlist some proximity measures for intrusion data analysis and detection. We also report some experimental results for detection of attacks over the KDD’99 dataset.",
"title": ""
},
{
"docid": "33df3da22e9a24767c68e022bb31bbe5",
"text": "The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classification model, the manager can accurately evaluate the applicant’s credit score. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct the hybrid SVM-based credit scoring models to evaluate the applicant’s credit score from the applicant’s input features. Two credit datasets in UCI database are selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved an identical classificatory accuracy with relatively few input features. Additionally, combining genetic algorithms with SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection task and model parameters optimization. Experimental results show that SVM is a promising addition to the existing data mining methods. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b19893324e4012a622c0250436e1ab3",
"text": "Nowadays, email is one of the fastest ways to conduct communications through sending out information and attachments from one to another. Individuals and organizations are all benefit the convenience from email usage, but at the same time they may also suffer the unexpected user experience of receiving spam email all the time. Spammers flood the email servers and send out mass quantity of unsolicited email to the end users. From a business perspective, email users have to spend time on deleting received spam email which definitely leads to the productivity decrease and cause potential loss for organizations. Thus, how to detect the email spam effectively and efficiently with high accuracy becomes a significant study. In this study, data mining will be utilized to process machine learning by using different classifiers for training and testing and filters for data preprocessing and feature selection. It aims to seek out the optimal hybrid model with higher accuracy or base on other metric’s evaluation. The experiment results show accuracy improvement in email spam detection by using hybrid techniques compared to the single classifiers used in this research. The optimal hybrid model provides 93.00% of accuracy and 7.80% false positive rate for email spam detection.",
"title": ""
},
{
"docid": "c063474634eb427cf0215b4500182f8c",
"text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions examples and features.",
"title": ""
},
{
"docid": "3ce9e8006e9b6f90b2072e8d44e3d5ff",
"text": "Artificial Intelligence (AI) is an effective science which employs strong enough approaches, methods, and techniques to solve unsolvable real-world based problems. Because of its unstoppable rise towards the future, there are also some discussions about its ethics and safety. Shaping an AI-friendly environment for people and a people-friendly environment for AI can be a possible answer for finding a shared context of values for both humans and robots. In this context, objective of this paper is to address the ethical issues of AI and explore the moral dilemmas that arise from ethical algorithms, from pre-set or acquired values. In addition, the paper will also focus on the subject of AI safety. As general, the paper will briefly analyze the concerns and potential solutions to solving the ethical issues presented and increase readers’ awareness on AI safety as another related research interest.",
"title": ""
},
{
"docid": "c65050bb98a071fa8b60fa262536a476",
"text": "Proliferative periostitis is a pathologic lesion that displays an osteo-productive and proliferative inflammatory response of the periosteum to infection or other irritation. This lesion is a form of chronic osteomyelitis that is often asymptomatic, occurring primarily in children, and found only in the mandible. The lesion can be odontogenic or non-odontogenic in nature. A 12 year-old boy presented with an unusual odontogenic proliferative periostitis that originated from the lower left first molar, however, the radiographic radiolucent area and proliferative response were discovered at the apices of the lower left second molar. The periostitis was treated by single-visit non-surgical endodontic treatment of lower left first molar without antibiotic therapy. The patient has been recalled regularly; the lesion had significantly reduced in size 3-months postoperatively. Extraoral symmetry occurred at approximately one year recall. At the last visit, 2 years after initial treatment, no problems or signs of complications have occurred; the radiographic examination revealed complete resolution of the apical lesion and apical closure of the lower left second molar. Odontogenic proliferative periostitis can be observed at the adjacent normal tooth. Besides, this case demonstrates that non-surgical endodontics is a viable treatment option for management of odontogenic proliferative periostitis.",
"title": ""
},
{
"docid": "1da19f806430077f7ad957dbeb0cb8d1",
"text": "BACKGROUND\nTo date, periorbital melanosis is an ill-defined entity. The condition has been stated to be darkening of the skin around the eyes, dark circles, infraorbital darkening and so on.\n\n\nAIMS\nThis study was aimed at exploring the nature of pigmentation in periorbital melanosis.\n\n\nMETHODS\nOne hundred consecutive patients of periorbital melanosis were examined and investigated to define periorbital melanosis. Extent of periorbital melanosis was determined by clinical examination. Wood's lamp examination was performed in all the patients to determine the depth of pigmentation. A 2-mm punch biopsy was carried out in 17 of 100 patients.\n\n\nRESULTS\nIn 92 (92%) patients periorbital melanosis was an extension of pigmentary demarcation line over the face (PDL-F).\n\n\nCONCLUSION\nPeriorbital melanosis and pigmentary demarcation line of the face are not two different conditions; rather they are two different manifestations of the same disease.",
"title": ""
},
{
"docid": "0b1b4c8d501c3b1ab350efe4f2249978",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "505e0e34375e5d7fcab3c8e17b5cedef",
"text": "In spite of the widespread use of CAD systems for design and CAE systems for analysis, the two processes are not well integrated because CAD and CAE models inherently use different types of geometric models and there currently exists no generic, unified model that allows both design and analysis information to be specified and shared. XML has become the de-facto standard for data representation and exchange on the World-Wide Web. This paper proposes a data integration method based on the XML technique in order to resolve the problem of data transmission for CAD and CAE. Designers parametrically model the bridges through 3D CAD platform. CAE analysts conduct explicit dynamic FEA (Finite Element Analysis) on the designed bridge structure. CAD and CAE functions are accomplished through C/S architecture. An XML and Web Service based DAC (Design-Analysis Connection) is developed to maintain a consistence between CAD model and FEA model. The design is then displayed to the customers through B/S mechanism, which provides a convenient method for the customers to participate the design process. Since all the operations are conducted through internet/intranet, customers, designers and analysts are able to participate the design process at different geographical locations. According to the interface procedure of the model transformation compiled in this paper, the finite element model was successfully transformed from CAD system to CAE system.",
"title": ""
},
{
"docid": "a960ced0cd3859c037c43790a6b8436b",
"text": "Ferroresonance is a widely studied phenomenon but it is still not well understood because of its complex behavior. It is “fuzzy-resonance.” A simple graphical approach using fundamental frequency phasors has been presented to elevate the readers understanding. Its occurrence and how it appears is extremely sensitive to the transformer characteristics, system parameters, transient voltages and initial conditions. More efficient transformer core material has lead to its increased occurrence and it has considerable effects on system apparatus and protection. Power system engineers should strive to recognize potential ferroresonant configurations and design solutions to prevent its occurrence.",
"title": ""
},
{
"docid": "20926ad65458e5dc7c187ba40808f547",
"text": "The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategical criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "32e1b7734ba1b26a6a27e0504db07643",
"text": "Due to its high popularity and rich functionalities, the Portable Document Format (PDF) has become a major vector for malware propagation. To detect malicious PDF files, the first step is to extract and de-obfuscate Java Script codes from the document, for which an effective technique is yet to be created. However, existing static methods cannot de-obfuscate Java Script codes, existing dynamic methods bring high overhead, and existing hybrid methods introduce high false negatives. Therefore, in this paper, we present MPScan, a scanner that combines dynamic Java Script de-obfuscation and static malware detection. By hooking the Adobe Reader's native Java Script engine, Java Script source code and op-code can be extracted on the fly after the source code is parsed and then executed. We also perform a multilevel analysis on the resulting Java Script strings and op-code to detect malware. Our evaluation shows that regardless of obfuscation techniques, MPScan can effectively de-obfuscate and detect 98% malicious PDF samples.",
"title": ""
},
{
"docid": "1898ce1b6cb3a195de2d261bfd8bd7ce",
"text": "Unmanned aerial vehicles (UAV) are commonly used for missions in unknown environments, where an exact mathematical model of the environment may not be available. This paper provides a framework for using reinforcement learning to allow the UAV to navigate successfully in such environments. We conducted our simulation and real implementation to show how the UAVs can successfully learn to navigate through an unknown environment. Technical aspects regarding to applying reinforcement learning algorithm to a UAV system and UAV flight control were also addressed. This will enable continuing research using a UAV with learning capabilities in more important applications, such as wildfire monitoring, or search and rescue missions.",
"title": ""
},
{
"docid": "6a26355ef30ba95538c5c89dc07d36f3",
"text": "Gamification evolved to one of the most important trends in technology and therefore gains more and more practical and scientific notice. Yet academia lacks a comprehensive overview of research, even though a review of prior, relevant literature is essential for advancing knowledge in a field. Therefore a novel classification framework for Gamification in Information Systems with the intention to provide a structured, summarized as well as organized overview was constructed to close this gap of research. A literature review on Gamification in quality outlets combined with a Grounded Theory approach served as a starting point. As a result this paper provides a foundation for current and future research to advance the knowledge on Gamification. Moreover it offers a structure for Gamification research which was not available previously. Findings from the literature review were mapped to the classification framework and analyzed. Derived from the classification framework and its outcome future research outlets were identified.",
"title": ""
},
{
"docid": "d846d16aac9067c82dc85b9bc17756e0",
"text": "We present a novel solution to improve the performance of Chinese word segmentation (CWS) using a synthetic word parser. The parser analyses the internal structure of words, and attempts to convert out-of-vocabulary words (OOVs) into in-vocabulary fine-grained sub-words. We propose a pipeline CWS system that first predicts this fine-grained segmentation, then chunks the output to reconstruct the original word segmentation standard. We achieve competitive results on the PKU and MSR datasets, with substantial improvements in OOV recall.",
"title": ""
},
{
"docid": "e09dcb9cdd7f9a8d1c0a0449fd9b11f8",
"text": "Radio-frequency identification (RFID) is being widely used in supply chain and logistics applications for wireless identification and the tracking and tracing of goods, with excellent performance for the long-range interrogation of tagged pallets and cases (up to 4-6 m, with passive tags). Item-level tagging (ILT) has also received much attention, especially in the pharmaceutical and retail industries. Low-frequency (125-134 KHz) and high-frequency (HF) (13.56 MHz) RFID systems have traditionally been used for ILT applications, where the radio-frequency (RF) power from the reader is delivered to the passive tags by inductive coupling. Recently, ultra-HF (UHF) (840-960 MHz) near-field (NF) RFID systems [1] have attracted increasing attention because of the merits of the much higher reading speed and capability to detect a larger number of tags (bulk reading). A UHF NF RFID system is a valuable solution to implement a reliable short-range wireless link (up to a few tens of centimeters) for ILT applications. Because the tags can be made smaller, RFID-based applications can be extended to extremely minuscule items (e.g., retail apparel, jewelry, drugs, rented apparel) as well as the successful implementation of RFID-based storage spaces, smart conveyor belts, and shopping carts.",
"title": ""
},
{
"docid": "2fe5a40499012640b3b4d18b134b3b7e",
"text": "Hollywood has often been called the land of hunches and wild guesses. The uncertainty associated with the predictability of product demand makes the movie business a risky endeavor. Therefore, predicting the box-office receipts of a particular motion picture has intrigued many scholars and industry leaders as a difficult and challenging problem. In this study, with a rather large and feature rich dataset, we explored the use of data mining methods (e.g., artificial neural networks, decision trees and support vector machines along with information fusion based ensembles) to predict the financial performance of a movie at the box-office before its theatrical release. In our prediction models, we have converted the forecasting problem into a classification problem—rather than forecasting the point estimate of box-office receipts; we classified a movie (based on its box-office receipts) into nine categories, ranging from a “flop” to a “blockbuster.” Herein we present our exciting prediction results where we compared individual models to those of the ensamples.",
"title": ""
},
{
"docid": "6aa5b9ffcbecb624224ac0d8153ffcc8",
"text": "The successful implementation of new technologies is dependent on many factors including the efficient management of human resources. Furthermore, recent research indicates that intellectual assets and resources can be utilised much more efficiently and effectively if organisations apply knowledge management techniques for leveraging their human resources and enhancing their personnel management. The human resources departments are well positioned to ensure the success of knowledge management programs, which are directed at capturing, using and re-using employees' knowledge. Through human resources management a culture that encourages the free flow of knowledge for meeting organisational goals can be created. The strategic role of the human resources department in identifying strategic and knowledge gaps using knowledge mapping is discussed in this paper. In addition, the drivers and implementation strategies for knowledge management programs are proposed.",
"title": ""
}
] |
scidocsrr
|
27667e5ff79ce3d365ec5ac624d3901d
|
Enhancing Modern Supervised Word Sense Disambiguation Models by Semantic Lexical Resources
|
[
{
"docid": "b8702cb8d18ae53664f3dfff95152764",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
{
"docid": "515fac2b02637ddee5e69a8a22d0e309",
"text": "The continuous expansion of the multilingual information society has led in recent years to a pressing demand for multilingual linguistic resources suitable to be used for different applications. In this paper we present the WordNet Domains Hierarchy (WDH), a language-independent resource composed of 164, hierarchically organized, domain labels (e.g. Architecture, Sport, Medicine). Although WDH has been successfully applied to various Natural Language Processing tasks, the first available version presented some problems, mostly related to the lack of a clear semantics of the domain labels. Other correlated issues were the coverage and the balancing of the domains. We illustrate a new version of WDH addressing these problems by an explicit and systematic reference to the Dewey Decimal Classification. The new version of WDH has a better defined semantics and is applicable to a wider range of tasks.",
"title": ""
}
] |
[
{
"docid": "1898536161383682f22126c59e185047",
"text": "E-mail foldering or e-mail classification into user predefined folders can be viewed as a text classification/categorization problem. However, it has some intrinsic properties that make it more difficult to deal with, mainly the large cardinality of the class variable (i.e. the number of folders), the different number of e-mails per class state and the fact that this is a dynamic problem, in the sense that e-mails arrive in our mail-forders following a time-line. Perhaps because of these problems, standard text-oriented classifiers such as Naive Bayes Multinomial do no obtain a good accuracy when applied to e-mail corpora. In this paper, we identify the imbalance among classes/folders as the main problem, and propose a new method based on learning and sampling probability distributions. Our experiments over a standard corpus (ENRON) with seven datasets (e-mail users) show that the results obtained by Naive Bayes Multinomial significantly improve when applying the balancing algorithm first. For the sake of completeness in our experimental study we also compare this with another standard balancing method (SMOTE) and classifiers.",
"title": ""
},
{
"docid": "79f1473d4eb0c456660543fda3a648f1",
"text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.",
"title": ""
},
{
"docid": "f8984d660f39c66b3bd484ec766fa509",
"text": "The present paper focuses on Cyber Security Awareness Campaigns, and aims to identify key factors regarding security which may lead them to failing to appropriately change people’s behaviour. Past and current efforts to improve information-security practices and promote a sustainable society have not had the desired impact. It is important therefore to critically reflect on the challenges involved in improving information-security behaviours for citizens, consumers and employees. In particular, our work considers these challenges from a Psychology perspective, as we believe that understanding how people perceive risks is critical to creating effective awareness campaigns. Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions. These antecedents of behaviour change are identified in several psychological models of behaviour. We review the suitability of persuasion techniques, including the widely used ‘fear appeals’. From this range of literature, we extract essential components for an awareness campaign as well as factors which can lead to a campaign’s success or failure. Finally, we present examples of existing awareness campaigns in different cultures (the UK and Africa) and reflect on these.",
"title": ""
},
{
"docid": "025ada63a9347fc064ccc2bf191906c3",
"text": "There is considerable research interest on the meaning and measurement of resilience from a variety of research perspectives including those from the hazards/disasters and global change communities. The identification of standards and metrics for measuring disaster resilience is one of the challenges faced by local, state, and federal agencies, especially in the United States. This paper provides a new framework, the disaster resilience of place (DROP) model, designed to improve comparative assessments of disaster resilience at the local or community level. A candidate set of variables for implementing the model are also presented as a first step towards its implementation. Purchase Export",
"title": ""
},
{
"docid": "f67928d90801523ce9242cb49e035baf",
"text": "Herzberg published the two-factor theory of work motivation in 1959. The theory was highly controversial at the time it was published, claims to be the most replicated study in this area, and provided the foundation for numerous other theories and frameworks in human resource development (Herzberg, 1987). The theory states that job satisfaction and dissatisfaction are affected by two different sets of factors. Therefore, satisfaction and dissatisfaction cannot be measured on the same continuum. This paper examines the historical context in which the theory was developed, the methodology used to develop the theory, the controversy and attempts to duplicate the study, and the theory’s current relevance to HRD. Herzberg’s Two-Factor Theory 3 Herzberg published the two-factor theory of work motivation in 1959. The theory was highly controversial at the time it was published, claims to be the most replicated study in this area, and provided the foundation for numerous other theories and frameworks in human resource development (Herzberg, 1987). The theory states that job satisfaction and dissatisfaction are affected by two different sets of factors. Therefore, satisfaction and dissatisfaction cannot be measured on the same continuum. Herzberg’s research was conducted during the late 1950s within a thirty mile radius of Pittsburg, which was at the time a center for heavy industry. It was a time of full employment and nearly 100% utilization of plants and facilities. Although demographical information of the workers studied was not explicitly stated by the authors in the literature, it is implied that the majority of the workers studied were white males. It was also a period of heavy unionization. This is in stark contrast to the current work environment of customer-service oriented jobs, high unemployment rates, idle and closed plants, the diverse workforce, and the decline of unionization. This paper asks the following research question: is the two-factor theory still relevant considering the historical context in which the theory was developed, the methodology used, and the changed dynamics of the workforce? I attempt to answer this question by conducting an integrative literature review. I will first give an overview of the theory. I will then describe my research method and provide an overview of the literature reviewed. Next, I will discuss key findings. Finally, I will discuss the theoretical and practical implications of this paper. Overview of the Theory The two-factor theory of job satisfaction was the result of a five year research program on job attitudes initiated by a grant from The Buhl Foundation. There was an urgent need at the time Herzberg’s Two-Factor Theory 4 for more and better insight about the attitudes of people towards their jobs due to the prevalence of job dissatisfaction indicators such as strikes, slow downs, and filing of grievances (Herzberg, Mausner, Peterson, & Capwell, 1957). During the first stage of the program, Herzberg and his colleagues conducted a comprehensive literature review of over 2000 writings published between 1900 and 1955. The literature yielded contradictory results, and the research designs of the studies varied widely in quality and the methodologies used (Herzberg, Mausner, & Snyderman, 1959). Based on their review of the literature, Herzberg et al. (1959) made core assumptions on which to base their hypothesis and research design. 
First, there was enough evidence to assume that there was some relationship between job attitudes and productivity. Second, the characteristics of dissatisfied workers had been well-defined in the existing literature. Third, the factors related to job attitudes had also been previously well-defined. Herzberg et al. (1959) developed an initial hypothesis that satisfaction and dissatisfaction could not be reliably measured on the same continuum. Herzberg et al. next conducted an empirical study to test the hypothesis. After two pilot programs, the design and hypothesis were further developed and expanded (Herzberg et al., 1959). The main hypothesis stated that factors leading to positive attitudes and those leading to negative attitudes will differ. The second hypothesis stated that factors and effects involved in long-range sequences of events would differ from those in short-range sequences. The major study used the critical incident technique and was conducted at nine sites within a 30 mile radius of Pittsburg. A total of 203 accountants and engineers were studied. Participants were led through a semi-structured interview in which they were asked to describe any time when they felt either exceptionally good or bad about their job. After describing the story in detail, they were asked for another story at the other end of the continuum. The Herzberg’s Two-Factor Theory 5 participants were then asked to rate their experience on a scale of one to 21, with one indicating that the experience hardly affected their feelings, and 21 indicating that it was an experience with serious impact. These stories were then categorized into high and low sequences. High sequences had a high impact on job attitude, and low sequences had minimal impact on job attitude. Herzberg et al. (1959) found that Maslow’s theory of personal growth and selfactualization became the keys to understanding the good feelings in these sequences. The authors found certain trends in the characteristics of high and low sequences. In the high sequences, only a small number of factors were responsible for good feelings about the job. All of those factors were related to the intrinsic factors of the job and were predominantly long lasting. When good feelings about the job were short lasting, they stemmed from specific achievements and recognition about those achievements as opposed to the job itself. The high sequence events provide contrast to the low sequence events. It was found that a great many things can be a source of dissatisfaction, but only certain factors can contribute to satisfaction (Herzberg et al., 1959). Low sequence factors were rarely found in the high sequences. Salary was the exception to these findings as it was mentioned with similar frequency in both the high and low range stories. However, when viewed within the context of the events, it became apparent to the researchers that salary is primarily a dissatisfier. When salary was mentioned as a satisfier, it was related to appreciation and recognition of a job done well and not as a factor in itself. From this data, the original hypothesis was restated and became the two-factor theory of job satisfaction (Herzberg et al., 1959). Factors that affect job satisfaction are divided into two categories. Hygiene factors surround the doing of the job. They include supervision, interpersonal relations, physical working conditions, salary, company policy and administration, Herzberg’s Two-Factor Theory 6 benefits, and job security. 
Motivation factors lead to positive job attitudes because they satisfy the need for self-actualization. Motivation factors are achievement, recognition, the work itself, responsibility, and advancement. The opposite of satisfaction is no satisfaction. The opposite of dissatisfaction is no dissatisfaction. The satisfaction of hygiene needs can prevent dissatisfaction and poor performance, but only the satisfaction of the motivation factors will bring the type of productivity improvement sought by companies (Herzberg et al., 1959). The researchers also examined the impact of the sequences on performance, turnover, attitude toward the company, and mental health (Herzberg et al., 1959). They found that attitudes influence the way the job is done and that favorable attitudes affect performance more than unfavorable attitudes. In terms of turnover, negative attitude resulted in some degree of physical or psychological withdrawal from the job. In relation to attitude toward the company, the study showed that a company can expect the degree of loyalty to vary with the degree of job satisfaction. Finally, the results showed no clear evidence for any effect on mental health, although the participants themselves perceived that a relationship existed. It is important to understand the conventional ideas of job satisfaction at the time Herzberg et al. published this theory in order to fully understand the implications. Conventional explanations of job satisfaction at the time considered satisfaction and dissatisfaction as extremes on a single continuum with a neutral condition in the midpoint in which the individual is neither satisfied nor dissatisfied (Behling, Labovitz, & Kosmo, 1968). Workers shift along this singular scale as factors are changed or introduced. Accordingly, organizations focused on hygiene factors in an attempt to improve productivity. Herzberg et al. (1959) argued that this was the wrong approach. In order to increase satisfaction, the motivation factors must be improved. According to Herzberg et al. (1959), jobs should be restructured to increase the ability of Herzberg’s Two-Factor Theory 7 workers to achieve goals that are meaningfully related to the doing of the job. Job satisfaction can also be reached by matching the individual’s work capacity to the work he will need to do during the selection process. It is equally important to recognize the supervisor’s role in job satisfaction. They must provide recognition when needed and effectively plan and organize the work. Finally, although it is not realistic to allow the worker to set their own goals in most circumstances, the worker can often determine how they will achieve their goal. This will give workers a greater sense of achievement over their work. There are several criticisms of the two-factor theory. They are: (a) the theory appears to be bound to the critical incident method; (b) the theory confuses events causing feelings of satisfaction and dissatisfaction with the agent that caused the event to happen; (c) the reli",
"title": ""
},
{
"docid": "9a70c1dbd61029482dbfa8d39238c407",
"text": "Background: Advertisers optimization is one of the most fundamental tasks in paid search, which is a multi-billion industry as a major part of the growing online advertising market. As paid search is a three-player game (advertisers, search users and publishers), how to optimize large-scale advertisers to achieve their expected performance becomes a new challenge, for which adaptive models have been widely used.",
"title": ""
},
{
"docid": "5b3e9895359948d2190f5d8223a47045",
"text": "Inferring the emotional content of words is important for text-based sentiment analysis, dialogue systems and psycholinguistics, but word ratings are expensive to collect at scale and across languages or domains. We develop a method that automatically extends word-level ratings to unrated words using signed clustering of vector space word representations along with affect ratings. We use our method to determine a word’s valence and arousal, which determine its position on the circumplex model of affect, the most popular dimensional model of emotion. Our method achieves superior out-of-sample word rating prediction on both affective dimensions across three different languages when compared to state-of-theart word similarity based methods. Our method can assist building word ratings for new languages and improve downstream tasks such as sentiment analysis and emotion detection.",
"title": ""
},
{
"docid": "d593b96d11dd8a3516816d85fce5c7a0",
"text": "This paper presents an approach for the integration of Virtual Reality (VR) and Computer-Aided Design (CAD). Our general goal is to develop a VR–CAD framework making possible intuitive and direct 3D edition on CAD objects within Virtual Environments (VE). Such a framework can be applied to collaborative part design activities and to immersive project reviews. The cornerstone of our approach is a model that manages implicit editing of CAD objects. This model uses a naming technique of B-Rep components and a set of logical rules to provide straight access to the operators of Construction History Graphs (CHG). Another set of logical rules and the replay capacities of CHG make it possible to modify in real-time the parameters of these operators according to the user's 3D interactions. A demonstrator of our model has been developed on the OpenCASCADE geometric kernel, but we explain how it can be applied to more standard CAD systems such as CATIA. We combined our VR–CAD framework with multimodal immersive interaction (using 6 DoF tracking, speech and gesture recognition systems) to gain direct and intuitive deformation of the objects' shapes within a VE, thus avoiding explicit interactions with the CHG within a classical WIMP interface. In addition, we present several haptic paradigms specially conceptualized and evaluated to provide an accurate perception of B-Rep components and to help the user during his/her 3D interactions. Finally, we conclude on some issues for future researches in the field of VR–CAD integration.",
"title": ""
},
{
"docid": "19c230e85fb7556b6864ff332412bf71",
"text": "Given a graph G, a proper n − [p]-coloring is a mapping f : V (G) → 2{1,...,n} such that |f(v)| = p for any vertex v ∈ V (G) and f(v) ∩ f(u) = ∅ for any pair of adjacent vertices u and v. n − [p]-coloring is closely related to multicoloring. Finding multicoloring of induced subgraphs of the triangular lattice (called hexagonal graphs) has important applications in cellular networks. In this article we provide an algorithm to find a 7-[3]-coloring of triangle-free hexagonal graphs in linear time, which solves the open problem stated in [10] and improves the result of Sudeep and Vishwanathan [11], who proved the existence of a 14-[6]coloring. ∗This work was supported by grant N206 017 32/2452 for years 2007-2010",
"title": ""
},
{
"docid": "5e2e5ba17b6f44f2032c6c542918e23c",
"text": "BACKGROUND\nSubfertility and poor nutrition are increasing problems in Western countries. Moreover, nutrition affects fertility in both women and men. In this study, we investigate the association between adherence to general dietary recommendations in couples undergoing IVF/ICSI treatment and the chance of ongoing pregnancy.\n\n\nMETHODS\nBetween October 2007 and October 2010, couples planning pregnancy visiting the outpatient clinic of the Department of Obstetrics and Gynaecology of the Erasmus Medical Centre in Rotterdam, the Netherlands were offered preconception counselling. Self-administered questionnaires on general characteristics and diet were completed and checked during the visit. Six questions, based on dietary recommendations of the Netherlands Nutrition Centre, covered the intake of six main food groups (fruits, vegetables, meat, fish, whole wheat products and fats). Using the questionnaire results, we calculated the Preconception Dietary Risk score (PDR), providing an estimate of nutritional habits. Dietary quality increases with an increasing PDR score. We define ongoing pregnancy as an intrauterine pregnancy with positive heart action confirmed by ultrasound. For this analysis we selected all couples (n=199) who underwent a first IVF/ICSI treatment within 6 months after preconception counselling. We applied adjusted logistic regression analysis on the outcomes of interest using SPSS.\n\n\nRESULTS\nAfter adjustment for age of the woman, smoking of the woman, PDR of the partner, BMI of the couple and treatment indication we show an association between the PDR of the woman and the chance of ongoing pregnancy after IVF/ICSI treatment (odds ratio 1.65, confidence interval: 1.08-2.52; P=0.02]. Thus, a one-point increase in the PDR score associates with a 65% increased chance of ongoing pregnancy.\n\n\nCONCLUSIONS\nOur results show that increasing adherence to Dutch dietary recommendations in women undergoing IVF/ICSI treatment increases the chance of ongoing pregnancy. These data warrant further confirmation in couples achieving a spontaneous pregnancy and in randomized controlled trials.",
"title": ""
},
{
"docid": "1fec7e850333576193bce7f4f4ecc2f3",
"text": "We study several machine learning algorithms for cross-lan7guage patent retrieval and classification. In comparison with most of other studies involving machine learning for cross-language information retrieval, which basically used learning techniques for monolingual sub-tasks, our learning algorithms exploit the bilingual training documents and learn a semantic representation from them. We study Japanese–English cross-language patent retrieval using Kernel Canonical Correlation Analysis (KCCA), a method of correlating linear relationships between two variables in kernel defined feature spaces. The results are quite encouraging and are significantly better than those obtained by other state of the art methods. We also investigate learning algorithms for cross-language document classification. The learning algorithm are based on KCCA and Support Vector Machines (SVM). In particular, we study two ways of combining the KCCA and SVM and found that one particular combination called SVM_2k achieved better results than other learning algorithms for either bilingual or monolingual test documents. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c23976667414fd4786ac1d71363ee04d",
"text": "More and more sensitive information is transmitted and stored in computer networks. Security has become a critical issue. As traditional cryptographic systems are now vulnerable to attacks, DNA based cryptography has been identified as a promising technology because of the vast parallelism and extraordinary information density. While a body of research has proposed the DNA based encryption algorithm, no research has provided solutions to distribute complex and long secure keys. This paper introduces a Hamming code and a block cipher mechanism to ensure secure transmission of a secure key. The research overcomes the limitation on the length of the secure key represented by DNA strands. Therefore it proves that real biological DNA strands are useful for encryption computing. To evaluate our method, we apply the block cipher mechanism to optimize a DNA-based implementation of a conventional symmetric encryption algorithm, described as “yet another encryption algorithm”. Moreover, a maximum length matching algorithm is developed to provide immunity against frequency attacks.",
"title": ""
},
{
"docid": "e6b0b5741d5f84ab2116b19c3055200d",
"text": "The query logs from an on-line map query system provide rich cues to understand the behaviors of human crowds. With the growing ability of collecting large scale query logs, the query suggestion has been a topic of recent interest. In general, query suggestion aims at recommending a list of relevant queries w.r.t. users’ inputs via an appropriate learning of crowds’ query logs. In this paper, we are particularly interested in map query suggestions (e.g., the predictions of location-related queries) and propose a novel model Hierarchical Contextual Attention Recurrent Neural Network (HCAR-NN) for map query suggestion in an encoding-decoding manner. Given crowds map query logs, our proposed HCAR-NN not only learns the local temporal correlation among map queries in a query session (e.g., queries in a short-term interval are relevant to accomplish a search mission), but also captures the global longer range contextual dependencies among map query sessions in query logs (e.g., how a sequence of queries within a short-term interval has an influence on another sequence of queries). We evaluate our approach over millions of queries from a commercial search engine (i.e., Baidu Map). Experimental results show that the proposed approach provides significant performance improvements over the competitive existing methods in terms of classical metrics (i.e., Recall@K and MRR) as well as the prediction of crowds’ search missions.",
"title": ""
},
{
"docid": "fdd4295dc3be3ec06c1785f3bdadd00e",
"text": "The paper presents a method for automatically detecting pallets and estimating their position and orientation. For detection we use a sliding window approach with efficient candidate generation, fast integral features and a boosted classifier. Specific information regarding the detection task such as region of interest, pallet dimensions and pallet structure can be used to speed up and validate the detection process. Stereo reconstruction is employed for depth estimation by applying Semi-Global Matching aggregation with Census descriptors. Offline test results show that successful detection is possible under 0.5 seconds.",
"title": ""
},
{
"docid": "0d8821bf99cc0fef70f9ac04bd33ef76",
"text": "Since then, Tesco has expanded across the world. It now has over 2,200 stores including hypermarkets and Tesco Express outlets to meet different customer needs. As a conglomerate Tesco also offers alternative goods and services such as insurance, banking and online shopping. With net profits of around £3.4 billion Tesco has become the largest British retailer and one of the world's leading retail outlets on three continents. Tesco's growth has resulted in a worldwide workforce of over 468,000 employees.",
"title": ""
},
{
"docid": "740c75400509dd66ca05cdad8e562920",
"text": "Arabic optical character recognition (OCR) is the process of converting images that contain Arabic text to a format that can be edited. In this work, a simple approach for Arabic OCR is presented, the proposed method deployed correlation and dynamic-size windowing to segment and to recognize Arabic characters. The proposed coherent template recognition process is characterized by the ability of recognizing Arabic characters with different sizes. Recognition results reveal the robustness of the proposed method.",
"title": ""
},
{
"docid": "a887b4ed84d35c4d27f1c4de3cfd43b9",
"text": "Humic substances (HS) are complex mixtures of natural organic material which are found almost everywhere in the environment, and particularly in soils, sediments, and natural water. HS play key roles in many processes of paramount importance, such as plant growth, carbon storage, and the fate of contaminants in the environment. While most of the research on HS has been traditionally carried out by conventional experimental approaches, over the past 20 years complementary investigations have emerged from the application of computer modeling and simulation techniques. This paper reviews the literature regarding computational studies of HS, with a specific focus on molecular dynamics simulations. Significant achievements, outstanding issues, and future prospects are summarized and discussed.",
"title": ""
},
{
"docid": "589dd2ca6e12841f3dd4a6873e2ea564",
"text": "As many automated test input generation tools for Android need to instrument the system or the app, they cannot be used in some scenarios such as compatibility testing and malware analysis. We introduce DroidBot, a lightweight UI-guided test input generator, which is able to interact with an Android app on almost any device without instrumentation. The key technique behind DroidBot is that it can generate UI-guided test inputs based on a state transition model generated on-the-fly, and allow users to integrate their own strategies or algorithms. DroidBot is lightweight as it does not require app instrumentation, thus users do not need to worry about the inconsistency between the tested version and the original version. It is compatible with most Android apps, and able to run on almost all Android-based systems, including customized sandboxes and commodity devices. Droidbot is released as an open-source tool on GitHub, and the demo video can be found at https://youtu.be/3-aHG_SazMY.",
"title": ""
},
{
"docid": "ed05b17a9d8a3e330b098a7b0b0dcd34",
"text": "Accurate prediction of fault prone modules (a module is equivalent to a C function or a C+ + method) in software development process enables effective detection and identification of defects. Such prediction models are especially beneficial for large-scale systems, where verification experts need to focus their attention and resources to problem areas in the system under development. This paper presents a novel methodology for predicting fault prone modules, based on random forests. Random forests are an extension of decision tree learning. Instead of generating one decision tree, this methodology generates hundreds or even thousands of trees using subsets of the training data. Classification decision is obtained by voting. We applied random forests in five case studies based on NASA data sets. The prediction accuracy of the proposed methodology is generally higher than that achieved by logistic regression, discriminant analysis and the algorithms in two machine learning software packages, WEKA [I. H. Witten et al. (1999)] and See5. The difference in the performance of the proposed methodology over other methods is statistically significant. Further, the classification accuracy of random forests is more significant over other methods in larger data sets.",
"title": ""
},
{
"docid": "54850f62bf84e01716bc009f68aac3d7",
"text": "© 1966 by the Massachusetts Institute of Technology. From Leadership and Motivation, Essays of Douglas McGregor, edited by W. G. Bennis and E. H. Schein (Cambridge, MA: MIT Press, 1966): 3–20. Reprinted with permission. I t has become trite to say that the most significant developments of the next quarter century will take place not in the physical but in the social sciences, that industry—the economic organ of society—has the fundamental know-how to utilize physical science and technology for the material benefit of mankind, and that we must now learn how to utilize the social sciences to make our human organizations truly effective. Many people agree in principle with such statements; but so far they represent a pious hope—and little else. Consider with me, if you will, something of what may be involved when we attempt to transform the hope into reality.",
"title": ""
}
] |
scidocsrr
|
8c37cac0e3e5834d83660c341ce87090
|
Priority based dynamic resource allocation in Cloud computing with modified waiting queue
|
[
{
"docid": "5bb75cabe435f83b4f587bc04ba6cde9",
"text": "Cloud computing represents a novel on-demand computing approach where resources are provided in compliance to a set of predefined non-functional properties specified and negotiated by means of Service Level Agreements (SLAs). In order to avoid costly SLA violations and to timely react to failures and environmental changes, advanced SLA enactment strategies are necessary, which include appropriate resource-monitoring concepts. Currently, Cloud providers tend to adopt existing monitoring tools, as for example those from Grid environments. However, those tools are usually restricted to locality and homogeneity of monitored objects, are not scalable, and do not support mapping of low-level resource metrics e.g., system up and down time to high-level application specific SLA parameters e.g., system availability. In this paper we present a novel framework for managing the mappings of the Low-level resource Metrics to High-level SLAs (LoM2HiS framework). The LoM2HiS framework is embedded into FoSII infrastructure, which facilitates autonomic SLA management and enforcement. Thus, the LoM2HiS framework detects future SLA violation threats and can notify the enactor component to act so as to avert the threats. We discuss the conceptual model of the LoM2HiS framework, followed by the implementation details. Finally, we present the first experimental results and a proof of concept of the LoM2HiS framework.",
"title": ""
}
] |
[
{
"docid": "527c4c17aadb23a991d85511004a7c4f",
"text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.",
"title": ""
},
{
"docid": "e52747da1efda720298f9022fcc2ab99",
"text": "Addiction can be defined as drug-induced changes in the central nervous system (CNS) that produce maladaptive alterations in spontaneous behavior and in the behavioral response to readministration of that drug. Maladaptive behaviors include those identified as criteria for addiction in the DSM-IV. In general what most psychiatric metrics describe as addiction associated behaviors is the emergence of behaviors to obtain drug reward at the expense of engaging in behaviors to seek natural rewards, ranging from biological rewards such as sex to cultural rewards such as stable personal relationships. The substitution of drug reward for natural reward suggests that the neuropathology of addiction may reside in the same neural systems that mediate the detection and acquisition of natural rewards. This postulate forms a primary premise in the search for the neurobiological basis of addiction, and has revealed a circuit consisting of interconnections among limbic cortex, basal ganglia, and brainstem nuclei that is pathologically modified by repeated drug administration. The drug-induced changes in the structure and function of this circuit are progressive, and to some extent parallel the development of the behavioral characteristics of addiction. Over the last decade neurobiologists have come to describe the behavioral transition to addiction as a druginduced neuroplastic process (1–3). In parallel with the development and expression of addictive behaviors, the neurobiology of these two components of the transition to addiction can be described as: (a) the sequence of molecular events that establish the neuroplastic changes leading to addiction, and (b) the neuroplastic changes themselves. Accordingly, a number of molecular neuroplastic alterations have been identified in the brain after repeated drug administration, and some of these appear to be important in the development and/or expression of addictive behaviors. However, the process of identifying drug-induced changes is accelerating and producing a deluge of information that is proving increasingly difficult to integrate into a coherent sequence of neuroplastic changes that mediate addiction. In",
"title": ""
},
{
"docid": "b0687f53ba624136723f477d38e075d1",
"text": "Presenting historical content and information illustratively and interestingly for the audience is an interesting challenge. Now that mobile devices with good computational and graphical capabilities have become wide-spread, Augmented Reality (AR) has become an attractive solution. Historical events can be presented for the tourist in the very locations where they occurred. One of the easiest types of historical content to present in AR is historical photographs. This paper presents mobile applications to show historical photos for tourists in Augmented Reality. We present several on-site pilot cases and give a summary of technical findings and feedback from test users.",
"title": ""
},
{
"docid": "db5dcaddaa38f472afaa84b61e4ea650",
"text": "The dynamics of load, especially induction motors, are the driving force for short-term voltage stability (STVS) problems. In this paper, the equivalent rotation speed of motors is identified online and its recovery time is estimated next to realize an emergency-demand-response (EDR) based under speed load shedding (USLS) scheme to improve STVS. The proposed scheme consists of an EDR program and two regular stages (RSs). In the EDR program, contracted load is used as a fast-response resource rather than the last defense. The estimated recovery time (ERT) is used as the triggering signal for the EDR program. In the RSs, the amount of load to be shed at each bus is determined according to the assigned weights based on ERTs. Case studies on a practical power system in China Southern Power Grid have validated the performance of the proposed USLS scheme under various contingency scenarios. The utilization of EDR resources and the adaptive distribution of shedding amount in RSs guarantee faster voltage recovery. Therefore, USLS offers a new and more effective approach compared with existing under voltage load shedding to improve STVS.",
"title": ""
},
{
"docid": "464b66e2e643096bd344bea8026f4780",
"text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies‟ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.",
"title": ""
},
{
"docid": "6d7188bd9d7a9a6c80c573d6184d467d",
"text": "Background: Feedback of the weak areas of knowledge in RPD using continuous competency or other test forms is very essential to develop the student knowledge and the syllabus as well. This act should be a regular practice. Aim: To use the outcome of competency test and the objectives structured clinical examination of removable partial denture as a reliable measure to provide a continuous feedback to the teaching system. Method: This sectional study was performed on sixty eight, fifth year students for the period from 2009 to 2010. The experiment was divided into two parts: continuous assessment and the final examination. In the first essay; some basic removable partial denture knowledge, surveying technique, and designing of the metal framework were used to estimate the learning outcome. While in the second essay, some components of the objectives structured clinical examination were compared to the competency test to see the difference in learning outcome. Results: The students’ performance was improved in the final assessment just in some aspects of removable partial denture. However, for the surveying, the students faced some problems. Conclusion: the continuous and final tests can provide a simple tool to advice the teachers for more effective teaching of the RPD. So that the weakness in specific aspects of the RPD syllabus can be detected and corrected continuously from the beginning, during and at the end of the course.",
"title": ""
},
{
"docid": "c8598e04ef93f6127333b79a83508daf",
"text": "Nitric oxide (NO) is an important signaling molecule in multicellular organisms. Most animals produce NO from L-arginine via a family of dedicated enzymes known as NO synthases (NOSes). A rare exception is the roundworm Caenorhabditis elegans, which lacks its own NOS. However, in its natural environment, C. elegans feeds on Bacilli that possess functional NOS. Here, we demonstrate that bacterially derived NO enhances C. elegans longevity and stress resistance via a defined group of genes that function under the dual control of HSF-1 and DAF-16 transcription factors. Our work provides an example of interspecies signaling by a small molecule and illustrates the lifelong value of commensal bacteria to their host.",
"title": ""
},
{
"docid": "eb1fb0b76a94be57c6563f3cc2c3bbd2",
"text": "Recent state-of-the-art Deep Reinforcement Learning algorithms, such as A3C and UNREAL, are designed to train on a single device with only CPU's. Using GPU acceleration for these algorithms results in low GPU utilization, which means the full performance of the GPU is not reached. Motivated by the architecture changes made by the GA3C algorithm, which gave A3C better GPU acceleration, together with the high learning efficiency of the UNREAL algorithm, this paper extends GA3C with the auxiliary tasks from UNREAL to create a Deep Reinforcement Learning algorithm, GUNREAL, with higher learning efficiency and also benefiting from GPU acceleration. We show that our GUNREAL system reaches higher scores on several games in the same amount of time than GA3C.",
"title": ""
},
{
"docid": "fa0d1e0ad3bdc4ea86916035f49e10cb",
"text": "This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. Abstract 19 Abnormal behavior detection has been one of the most important research branches in intelligent video content 20 analysis. In this paper, we propose a novel abnormal behavior detection approach by introducing trajectory 21 sparse reconstruction analysis (SRA). Given a video scenario, we collect trajectories of normal behaviors and 22 extract the control point features of cubic B-spline curves to construct a normal dictionary set, which is further 23 divided into Route sets. On the dictionary set, sparse reconstruction coefficients and residuals of a test trajectory 24 to the Route sets can be calculated with SRA. The minimal residual is used to classify the test behavior into a 25 normal behavior or an abnormal one. SRA is solved by L1-norm minimization, leading to that a few of dictionary 26 samples are used when reconstructing a behavior trajectory, which guarantees that the proposed approach is 27 valid even when the dictionary set is very small. Experimental results with comparisons show that the proposed 28 approach improves the state-of-the-art.",
"title": ""
},
{
"docid": "800e720e61d3713a0625ec6660302b50",
"text": "The increasing amount of available textual information makes necessary the use of Natural Language Processing (NLP) tools. These tools have to be used on large collections of documents in different languages. But NLP is a complex task that relies on many processes and resources. As a consequence, NLP tools must be both configurable and efficient: specific software architectures must be designed for this purpose. We present in this paper the LIMA multilingual analysis platform, developed at CEA LIST. This configurable platform has been designed to develop NLP based industrial applications while keeping enough flexibility to integrate various processes and resources. This design makes LIMA a linguistic analyzer that can handle languages as different as French, English, German, Arabic or Chinese. Beyond its architecture principles and its capabilities as a linguistic analyzer, LIMA also offers a set of tools dedicated to the test and the evaluation of linguistic modules and to the production and the management of new linguistic resources. 1 Context and objectives In this article, we present the LIMA (CEA List Multilingual Analyzer) platform which is, as GATE (Cunningham et al., 2002), together an architecture, a set of tools and resources and an environment for developing applications based on Natural Language Processing (NLP). This platform was developed by the LVIC laboratory of CEA LIST with the following requirements: • multilingualism, with the objective of dealing with a broad spectrum of languages; • a large diversity of applications. LIMA aims at being used as a basic component for various applications that can be text-based applications such as automatic summarization or question-answering but can also be applications dealing with multimedia documents; • extensibility, that is to say the ability to support the addition of new functionalities. As illustrated by Section 2.2, the current version of LIMA mainly performs analyses up to syntactic analysis but we also aim at extending it to semantic and discourse analyses; • the need for efficiency. A platform such as LIMA must be able to process very large corpora both because the processing of such corpora is more and more required by work in Computational Linguistics (see (Pantel et al., 2009) for instance) and because it also has to be used in an industrial context. The first three requirements make necessary to design an architecture based on modularity and flexibility at a high degree. All languages are not characterized by the same set of linguistic phenomena and as a consequence, their processing doesn’t rely on the combination of the same elementary analyses. Moreover, even if a linguistic analysis module can be used for two different languages, the linguistic resources it relies on are generally specific to each language. The same need for modularity and flexibility comes from the diversity of applications LIMA has to deal with: using the same system for lemmatizing a set of keywords from a base of images, a newspaper article or the transcription of a phone conversation is not the best means to have good results in each of these three contexts. Finally, the main difficulty LIMA had to face was to fulfill these requirements without sacrificing efficiency. Several kinds of architectures were already proposed to address these different issues. Process-oriented architectures focus on the combination and the control of a set of modules together with the communication between them. 
They generally implement a loosely integration by the means of a “glue” that can be a multi-agent system as in TalLab (Wolinski et al., 1998) or a client-server architecture as in FreeLing (Carreras et al., 2004). Data-oriented architectures also represent a weak type of integration by concentrating on the normalization of data between modules, as in the MULTEXT project (Ide and Véronis, 1994) or the LT XML Library (Brew et al., 1999). The TIPSTER-like architectures (Grishman, 1997) go a step further by imposing both a shared representation of data, often by annotation graphs (Bird and Liberman, 1999), and a uniform interface for modules. The GATE platform (Cunningham et al., 2002) is the typical representative of this kind of architectures but TEXTRACT (Neff et al., 2004) and its most recent descendant, UIMA (Ferrucci and Lally, 2004), also belong to this category. Finally, the highest degree of integration is reached by formalism-oriented architectures in which both data and processes are represented through a declarative formalism associated to a kind of inference engine. This approach was initially dedicated to the development of grammars as in ALVEY (Grover et al., 1993) but was also applied more widely through platforms such as ALEP (Simkins, 1994), NooJ (Koeva et al., 2007) or Outilex (Blanc et al., 2006). As it will be illustrated in the following sections, we chose a TIPSTER-like architecture for LIMA as it represents the best trade-off between modularity and efficiency, which are",
"title": ""
},
{
"docid": "d5ddc81c54761bd00d969d3413c94321",
"text": "With increasing popularity and complexity of social networks, community detection in these networks has become an important research area. Several algorithms are available to detect overlapping community structures based on different approaches. Here we propose a two step genetic algorithm to detect overlapping communities based on node representation. First, we find disjoint communities and these disjoint communities are used to find overlapping communities. We use modularity as our optimization function. Experiments are performed on both artificial and real networks to verify efficiency and scalability of our algorithm.",
"title": ""
},
{
"docid": "5816f70a7f4d7d0beb6e0653db962df3",
"text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.",
"title": ""
},
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
},
{
"docid": "1234ea50708c6f51e3e764fb008d981c",
"text": "We address the problem of large-scale visual place recognition for situations where the scene undergoes a major change in appearance, for example, due to illumination (day/night), change of seasons, aging, or structural modifications over time such as buildings being built or destroyed. Such situations represent a major challenge for current large-scale place recognition methods. This work has the following three principal contributions. First, we demonstrate that matching across large changes in the scene appearance becomes much easier when both the query image and the database image depict the scene from approximately the same viewpoint. Second, based on this observation, we develop a new place recognition approach that combines (i) an efficient synthesis of novel views with (ii) a compact indexable image representation. Third, we introduce a new challenging dataset of 1,125 camera-phone query images of Tokyo that contain major changes in illumination (day, sunset, night) as well as structural changes in the scene. We demonstrate that the proposed approach significantly outperforms other large-scale place recognition techniques on this challenging data.",
"title": ""
},
{
"docid": "9ba6656cb67dcb72d4ebadcaf9450f40",
"text": "OBJECTIVE\nThe Japan Ankylosing Spondylitis Society conducted a nationwide questionnaire survey of spondyloarthropathies (SpA) in 1990 and 1997, (1) to estimate the prevalence and incidence, and (2) to validate the criteria of Amor and the European Spondylarthropathy Study Group (ESSG) in Japan.\n\n\nMETHODS\nJapan was divided into 9 districts, to each of which a survey supervisor was assigned. According to unified criteria, each supervisor selected all the clinics and hospitals with potential for SpA patients in the district. The study population consisted of all patients with SpA seen at these institutes during a 5 year period (1985-89) for the 1st survey and a 7 year period (1990-96) for the 2nd survey.\n\n\nRESULTS\nThe 1st survey recruited 426 and the 2nd survey 638 cases, 74 of which were registered in both studies. The total number of patients with SpA identified 1985-96 was 990 (760 men, 227 women). They consisted of patients with ankylosing spondylitis (68.3%), psoriatic arthritis (12.7%), reactive arthritis (4.0%), undifferentiated SpA (5.4%), inflammatory bowel disease (2.2%), pustulosis palmaris et plantaris (4.7%), and others (polyenthesitis, etc.) (0.8%). The maximum onset number per year was 49. With the assumption that at least one-tenth of the Japanese population with SpA was recruited, incidence and prevalence were estimated not to exceed 0.48/100,000 and 9.5/100,000 person-years, respectively. The sensitivity was 84.0% for Amor criteria and 84.6 for ESSG criteria.\n\n\nCONCLUSION\nThe incidence and prevalence of SpA in Japanese were estimated to be less than 1/10 and 1/200, respectively, of those among Caucasians. The adaptability of the Amor and ESSG criteria was validated for the Japanese population.",
"title": ""
},
{
"docid": "9c52333616cf2b1dce267333f4fad2ba",
"text": "We present a new type of actuatable display, called Tilt Displays, that provide visual feedback combined with multi-axis tilting and vertical actuation. Their ability to physically mutate provides users with an additional information channel that facilitates a range of new applications including collaboration and tangible entertainment while enhancing familiar applications such as terrain modelling by allowing 3D scenes to be rendered in a physical-3D manner. Through a mobile 3x3 custom built prototype, we examine the design space around Tilt Displays, categorise output modalities and conduct two user studies. The first, an exploratory study examines users' initial impressions of Tilt Displays and probes potential interactions and uses. The second takes a quantitative approach to understand interaction possibilities with such displays, resulting in the production of two user-defined gesture sets: one for manipulating the surface of the Tilt Display, the second for conducting everyday interactions.",
"title": ""
},
{
"docid": "39958f4825796d62e7a5935d04d5175d",
"text": "This paper presents a wireless system which enables real-time health monitoring of multiple patient(s). In health care centers patient's data such asheart rate needs to be constantly monitored. The proposed system monitors the heart rate and other such data of patient's body. For example heart rate is measured through a Photoplethysmograph. A transmitting module is attached which continuously transmits the encoded serial data using Zigbee module. A receiver unit is placed in doctor's cabin, which receives and decodes the data and continuously displays it on a User interface visible on PC/Laptop. Thus doctor can observe and monitor many patients at the same time. System also continuously monitors the patient(s) data and in case of any potential irregularities, in the condition of a patient, the alarm system connected to the system gives an audio-visual warning signal that the patient of a particular room needs immediate attention. In case, the doctor is not in his chamber, the GSM modem connected to the system also sends a message to all the doctors of that unit giving the room number of the patient who needs immediate care.",
"title": ""
},
{
"docid": "f6ed5214bd8d37560a0cb59f12ed7404",
"text": "Smart cities are powered by the ability to self-monitor and respond to signals and data feeds from heterogeneous physical sensors. These physical sensors, however, are fraught with interoperability and dependability challenges. Moreover, they also cannot shed light on human emotions and factors that impact smart city initiatives. Yet everyday, millions of city dwellers share their observations, thoughts, feelings, and experiences about their city through social media updates. This paper describes how citizens can serve as human sensors in providing supplementary, alternate, and complementary sources of information for smart cities. It presents a methodology, based on a probabilistic language model, to extract the perceptions that may be relevant to smart city initiatives from social media updates. Geo-tagged tweets collected over a two-month period from New York City are used to illustrate the potential of social media powered human sensors.",
"title": ""
},
{
"docid": "401aa3faf42ccdc2d63f5d76bd7092e4",
"text": "We introduce a Markov-model-based framework for Moving Target Defense (MTD) analysis. The framework allows modeling of a broad range of MTD strategies, provides general theorems about how the probability of a successful adversary defeating an MTD strategy is related to the amount of time/cost spent by the adversary, and shows how a multilevel composition of MTD strategies can be analyzed by a straightforward combination of the analysis for each one of these strategies. Within the proposed framework we define the concept of security capacity which measures the strength or effectiveness of an MTD strategy: the security capacity depends on MTD specific parameters and more general system parameters. We apply our framework to two concrete MTD strategies.",
"title": ""
},
{
"docid": "ca410a7cf7f36fdd145aed738f147d3f",
"text": "A range of values of a real function f : Ed + Iw can be used to implicitly define a subset of Euclidean space Ed. Such “implicit functions” have many uses in geometric and solid modeling. This paper focuses on the properties and construction of real functions for the representation of rigid solids (compact, semi-analytic, and regular subsets of Ed). We review some known facts about real functions defining compact semi-analytic sets, and their applications. The theory of R-functions developed in (Rvachev, 1982) provides means for constructing real function representations of solids described by the standard (non-regularized) set operations. But solids are not closed under the standard set operations, and such real function representations are rarely available in modem solid modeling systems. More generally, assuring that a real function f represents a regular set may be difficult. Until now, the regularity has either been assumed, or treated in an ad hoc fashion. We show that topological and extremal properties of real functions can be used to test for regularity, and discuss procedures for constructing real functions with desired properties for arbitrary solids.",
"title": ""
}
] |
scidocsrr
|
b0f22bbf9259d0f9e4fca4e924a75f4d
|
Hidden Topic Sentiment Model
|
[
{
"docid": "8d29cf5303d9c94741a8d41ca6c71da9",
"text": "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify the review sentiment polarity and minimum prior information have also been explored to further improve the sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.",
"title": ""
},
{
"docid": "209de57ac23ab35fa731b762a10f782a",
"text": "Although fully generative models have been successfully used to model the contents of text documents, they are often awkward to apply to combinations of text data and document metadata. In this paper we propose a Dirichlet-multinomial regression (DMR) topic model that includes a log-linear prior on document-topic distributions that is a function of observed features of the document, such as author, publication venue, references, and dates. We show that by selecting appropriate features, DMR topic models can meet or exceed the performance of several previously published topic models designed for specific data.",
"title": ""
}
] |
[
{
"docid": "ede29bc41058b246ceb451d5605cce2c",
"text": "Knowledge graphs have challenged the existing embedding-based approaches for representing their multifacetedness. To address some of the issues, we have investigated some novel approaches that (i) capture the multilingual transitions on different language-specific versions of knowledge, and (ii) encode the commonly existing monolingual knowledge with important relational properties and hierarchies. In addition, we propose the use of our approaches in a wide spectrum of NLP tasks that have not been well explored by related works.",
"title": ""
},
{
"docid": "0b1cfaca04454ed4a0d4ad130e3bc234",
"text": "Writing is a critical skill for young learners to master for academic purposes and as a work and life skill. This paper is part of a larger study on the English Language 2010 syllabus and its national curriculum in Singapore particularly in the area of the teaching of writing at the primary levels. In this paper, we report findings from a quantitative content analysis of both the syllabus and the curriculum as “policy texts” (Ball, 2005) to locate alignments and variances in a discussion of their potential impact on classroom instruction. Findings from the analysis of these documents reveal that, on the whole, the national curriculum is aligned not only to current approaches for the teaching of writing but also to the syllabus in terms of instructional principles. However, the findings also reveal a difference in terms of emphasis between both documents that may potentially restrict the realisation of syllabus outcomes in the area of writing instruction at the primary levels.",
"title": ""
},
{
"docid": "77b1e7b6f91cf5e2d4380a9d117ae7d9",
"text": "This paper theoretically introduces and develops a new operation diagram (OPD) and parameter estimator for the synchronous reluctance machine (SynRM). The OPD demonstrates the behavior of the machine's main performance parameters, such as torque, current, voltage, frequency, flux, power factor (PF), and current angle, all in one graph. This diagram can easily be used to describe different control strategies, possible operating conditions, both below- and above-rated speeds, etc. The saturation effect is also discussed with this diagram by finite-element-method calculations. A prototype high-performance SynRM is designed for experimental studies, and then, both machines' [corresponding induction machine (IM)] performances at similar loading and operation conditions are tested, measured, and compared to demonstrate the potential of SynRM. The laboratory measurements (on a standard 15-kW Eff1 IM and its counterpart SynRM) show that SynRM has higher efficiency, torque density, and inverter rating and lower rotor temperature and PF in comparison to IM at the same winding-temperature-rise condition. The measurements show that the torque capability of SynRM closely follows that of IM.",
"title": ""
},
{
"docid": "8686ffed021b68574b4c3547d361eac8",
"text": "* To whom all correspondence should be addressed. Abstract Face detection is an important prerequisite step for successful face recognition. The performance of previous face detection methods reported in the literature is far from perfect and deteriorates ungracefully where lighting conditions cannot be controlled. We propose a method that outperforms state-of-the-art face detection methods in environments with stable lighting. In addition, our method can potentially perform well in environments with variable lighting conditions. The approach capitalizes upon our near-IR skin detection method reported elsewhere [13][14]. It ascertains the existence of a face within the skin region by finding the eyes and eyebrows. The eyeeyebrow pairs are determined by extracting appropriate features from multiple near-IR bands. Very successful feature extraction is achieved by simple algorithmic means like integral projections and template matching. This is because processing is constrained in the skin region and aided by the near-IR phenomenology. The effectiveness of our method is substantiated by comparative experimental results with the Identix face detector [5].",
"title": ""
},
{
"docid": "61556b092c6b5607e8bf2c556202570f",
"text": "The problem of recognizing actions in realistic videos is challenging yet absorbing owing to its great potentials in many practical applications. Most previous research is limited due to the use of simplified action databases under controlled environments or focus on excessively localized features without sufficiently encapsulating the spatio-temporal context. In this paper, we propose to model the spatio-temporal context information in a hierarchical way, where three levels of context are exploited in ascending order of abstraction: 1) point-level context (SIFT average descriptor), 2) intra-trajectory context (trajectory transition descriptor), and 3) inter-trajectory context (trajectory proximity descriptor). To obtain efficient and compact representations for the latter two levels, we encode the spatiotemporal context information into the transition matrix of a Markov process, and then extract its stationary distribution as the final context descriptor. Building on the multichannel nonlinear SVMs, we validate this proposed hierarchical framework on the realistic action (HOHA) and event (LSCOM) recognition databases, and achieve 27% and 66% relative performance improvements over the state-of-the-art results, respectively. We further propose to employ the Multiple Kernel Learning (MKL) technique to prune the kernels towards speedup in algorithm evaluation.",
"title": ""
},
{
"docid": "5bc1c336b8e495e44649365f11af4ab8",
"text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.",
"title": ""
},
{
"docid": "bec41dd9e724598c8ab47fa1840cad61",
"text": "Described here is a case of suicide with the use of a chainsaw. A female suffering from schizophrenia committed suicide by an ingenious use of a chainsaw that resulted in the transection of her cervical spine and spinal cord. The findings of the resulting investigation are described and the mechanism of suicides with the use of a chainsaw is reviewed. A dry bone study was realized to determine the bone sections, the correlation between anatomic lesions and characteristics of chainsaw. The damage of organs and soft tissues is compared according to the kinds of chainsaw used.",
"title": ""
},
{
"docid": "9f6f00bf0872c54fbf2ec761bf73f944",
"text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.",
"title": ""
},
{
"docid": "33a9140fb57200a489b9150d39f0ab65",
"text": "In this paper, a double-quadrant state-of-charge (SoC)-based droop control method for distributed energy storage system is proposed to reach the proper power distribution in autonomous dc microgrids. In order to prolong the lifetime of the energy storage units (ESUs) and avoid the overuse of a certain unit, the SoC of each unit should be balanced and the injected/output power should be gradually equalized. Droop control as a decentralized approach is used as the basis of the power sharing method for distributed energy storage units. In the charging process, the droop coefficient is set to be proportional to the nth order of SoC, while in the discharging process, the droop coefficient is set to be inversely proportional to the nth order of SoC. Since the injected/output power is inversely proportional to the droop coefficient, it is obtained that in the charging process the ESU with higher SoC absorbs less power, while the one with lower SoC absorbs more power. Meanwhile, in the discharging process, the ESU with higher SoC delivers more power and the one with lower SoC delivers less power. Hence, SoC balancing and injected/output power equalization can be gradually realized. The exponent n of SoC is employed in the control diagram to regulate the speed of SoC balancing. It is found that with larger exponent n, the balancing speed is higher. MATLAB/simulink model comprised of three ESUs is implemented and the simulation results are shown to verify the proposed approach.",
"title": ""
},
{
"docid": "b770124e1e5a7b4161b7f00a9bf3916f",
"text": "In the biomedical domain large amount of text documents are unstructured information is available in digital text form. Text Mining is the method or technique to find for interesting and useful information from unstructured text. Text Mining is also an important task in medical domain. The technique uses for Information retrieval, Information extraction and natural language processing (NLP). Traditional approaches for information retrieval are based on key based similarity. These approaches are used to overcome these problems; Semantic text mining is to discover the hidden information from unstructured text and making relationships of the terms occurring in them. In the biomedical text, the text should be in the form of text which can be present in the books, articles, literature abstracts, and so forth. Most of information is stored in the text format, so in this paper we will focus on the role of ontology for semantic text mining by using WordNet. Specifically, we have presented a model for extracting concepts from text documents using linguistic ontology in the domain of medical.",
"title": ""
},
{
"docid": "5087353b4888832c2c801f06c94d3c67",
"text": "Many Automatic Question Generation (AQG) approaches have been proposed focusing on reading comprehension support; however, none of them addressed academic writing. We conducted a large-scale case study with 25 supervisors and 36 research students enroled in an Engineering Research Method course. We investigated trigger questions, as a form of feedback, produced by supervisors, and how they support these students’ literature review writing. In this paper, we identified the most frequent question types according to Graesser and Person’s Question Taxonomy and discussed how the human experts generate such questions from the source text. Finally, we proposed a more practical Automatic Question Generation Framework for supporting academic writing in engineering education.",
"title": ""
},
{
"docid": "7985e61fc9a4fa1d92fa6fafd4747ff2",
"text": "A single-ended InP transimpedance amplifier (TIA) for next generation high-bandwidth optical fiber communication systems is presented. The TIA exhibits 48 dB-Omega transimpedance and has a 3-dB bandwidth of 92 GHz. The input-referred current noise is 20 pA/radicHz and the transimpedance group delay is below 10 ps over the entire measured frequency range.",
"title": ""
},
{
"docid": "dc2a55da87c78acfd4413ddebdec6a1c",
"text": "The past decade has seen an explosion in the amount of digital information stored in electronic health records (EHRs). While primarily designed for archiving patient information and performing administrative healthcare tasks like billing, many researchers have found secondary use of these records for various clinical informatics applications. Over the same period, the machine learning community has seen widespread advances in the field of deep learning. In this review, we survey the current research on applying deep learning to clinical tasks based on EHR data, where we find a variety of deep learning techniques and frameworks being applied to several types of clinical applications including information extraction, representation learning, outcome prediction, phenotyping, and deidentification. We identify several limitations of current research involving topics such as model interpretability, data heterogeneity, and lack of universal benchmarks. We conclude by summarizing the state of the field and identifying avenues of future deep EHR research.",
"title": ""
},
{
"docid": "dde075f427d729d028d6d382670f8346",
"text": "Using social media Web sites is among the most common activity of today's children and adolescents. Any Web site that allows social interaction is considered a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. Such sites offer today's youth a portal for entertainment and communication and have grown exponentially in recent years. For this reason, it is important that parents become aware of the nature of social media sites, given that not all of them are healthy environments for children and adolescents. Pediatricians are in a unique position to help families understand these sites and to encourage healthy use and urge parents to monitor for potential problems with cyberbullying, \"Facebook depression,\" sexting, and exposure to inappropriate content.",
"title": ""
},
{
"docid": "02659cbf3b091fc0e4a38a55be77a900",
"text": "This paper proposes a new on-the-fly composition algorithm for Weighted Finite-State Transducers (WFSTs) in large-vocabulary continuous-speech recognition. In general on-the-fly composition, two transducers are composed during decoding, and a Viterbi search is performed based on the composed search space. In this new method, a Viterbi search is performed based on the first of two transducers. The second transducer is only used to rescore the hypotheses generated during the search. Since this rescoring is very efficient, the total amount of computation in the new method is almost the same as when using only the first transducer. In a 30kword vocabulary spontaneous lecture speech transcription task, our proposed method significantly outperformed the general on-the-fly composition method. Furthermore the speed of our method was slightly faster than that of decoding with a single fully composed and optimized WFST, where our method consumed only 20% of the memory usage required for decoding with the single WFST. Finally, we have achieved one-pass real-time speech recognition in an extremely large vocabulary of 1.8 million words.",
"title": ""
},
{
"docid": "88def96b7287ce217f1abf8fb1b413a5",
"text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.",
"title": ""
},
{
"docid": "b952967acb2eaa9c780bffe211d11fa0",
"text": "Cryptographic message authentication is a growing need for FPGA-based embedded systems. In this paper a customized FPGA implementation of a GHASH function that is used in AES-GCM, a widely-used message authentication protocol, is described. The implementation limits GHASH logic utilization by specializing the hardware implementation on a per-key basis. The implemented module can generate a 128bit message authentication code in both pipelined and unpipelined versions. The pipelined GHASH version achieves an authentication throughput of more than 14 Gbit/s on a Spartan-3 FPGA and 292 Gbit/s on a Virtex-6 device. To promote adoption in the field, the complete source code for this work has been made publically-available.",
"title": ""
},
{
"docid": "e9fa76fba0256cb99abf7992323a674b",
"text": "Identity formation in adolescence is closely linked to searching for and acquiring meaning in one's life. To date little is known about the manner in which these 2 constructs may be related in this developmental stage. In order to shed more light on their longitudinal links, we conducted a 3-wave longitudinal study, investigating how identity processes and meaning in life dimensions are interconnected across time, testing the moderating effects of gender and age. Participants were 1,062 adolescents (59.4% female), who filled in measures of identity and meaning in life at 3 measurement waves during 1 school year. Cross-lagged models highlighted positive reciprocal associations between (a) commitment processes and presence of meaning and (b) exploration processes and search for meaning. These results were not moderated by adolescents' gender or age. Strong identification with present commitments and reduced ruminative exploration helped adolescents in having a clear sense of meaning in their lives. We also highlighted the dual nature of search for meaning. This dimension was sustained by exploration in breadth and ruminative exploration, and it positively predicted all exploration processes. We clarified the potential for a strong sense of meaning to support identity commitments and that the process of seeking life meaning sustains identity exploration across time. (PsycINFO Database Record",
"title": ""
},
{
"docid": "5a2f6825292da2e21c1a47cc0c827a89",
"text": "This paper is the first to review the scene flow estimation field to the best of our knowledge, which analyzes and compares methods, technical challenges, evaluation methodologies and performance of scene flow estimation. Existing algorithms are categorized in terms of scene representation, data source, and calculation scheme, and the pros and cons in each category are compared briefly. The datasets and evaluation protocols are enumerated, and the performance of the most representative methods is presented. A future vision is illustrated with few questions arisen for discussion. This survey presents a general introduction and analysis of scene flow estimation.",
"title": ""
}
] |
scidocsrr
|
18c8b591effe9fe388b7b22fe30573cd
|
MRF Energy Minimization and Beyond via Dual Decomposition
|
[
{
"docid": "8d7a7bc2b186d819b36a0a8a8ba70e39",
"text": "Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRF’s, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.",
"title": ""
}
] |
[
{
"docid": "5e5fcac49c2ee3f944dbc02fe70461cd",
"text": "Microkernels long discarded as unacceptable because of their lower performance compared with monolithic kernels might be making a comeback in operating systems due to their potentially higher reliability, which many researchers now regard as more important than performance. Each of the four different attempts to improve operating system reliability focuses on preventing buggy device drivers from crashing the system. In the Nooks approach, each driver is individually hand wrapped in a software jacket to carefully control its interactions with the rest of the operating system, but it leaves all the drivers in the kernel. The paravirtual machine approach takes this one step further and moves the drivers to one or more machines distinct from the main one, taking away even more power from the drivers. Both of these approaches are intended to improve the reliability of existing (legacy) operating systems. In contrast, two other approaches replace legacy operating systems with more reliable and secure ones. The multiserver approach runs each driver and operating system component in a separate user process and allows them to communicate using the microkernel's IPC mechanism. Finally, Singularity, the most radical approach, uses a type-safe language, a single address space, and formal contracts to carefully limit what each module can do.",
"title": ""
},
{
"docid": "8e0b61e82179cc39b4df3d06448a3d14",
"text": "The antibacterial activity and antioxidant effect of the compounds α-terpineol, linalool, eucalyptol and α-pinene obtained from essential oils (EOs), against pathogenic and spoilage forming bacteria were determined. The antibacterial activities of these compounds were observed in vitro on four Gram-negative and three Gram-positive strains. S. putrefaciens was the most resistant bacteria to all tested components, with MIC values of 2% or higher, whereas E. coli O157:H7 was the most sensitive strain among the tested bacteria. Eucalyptol extended the lag phase of S. Typhimurium, E. coli O157:H7 and S. aureus at the concentrations of 0.7%, 0.6% and 1%, respectively. In vitro cell growth experiments showed the tested compounds had toxic effects on all bacterial species with different level of potency. Synergistic and additive effects were observed at least one dose pair of combination against S. Typhimurium, E. coli O157:H7 and S. aureus, however antagonistic effects were not found in these combinations. The results of this first study are encouraging for further investigations on mechanisms of antimicrobial activity of these EO components.",
"title": ""
},
{
"docid": "2eab78b8ec65340be1473086f31eb8c4",
"text": "We present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (DBMS). Such queries arise naturally in interactive exploratory decision-support applications.\nTraditional offline join algorithms are designed to minimize the time to completion of the query. In contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. Ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. Ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update. We show how ripple joins can be implemented in an existing DBMS using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join “aspect-ratio” parameters. In experiments with an initial implementation of our algorithms in the POSTGRES DBMS, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers.",
"title": ""
},
{
"docid": "a7187fe4496db8a5ea4a5c550c9167a3",
"text": "We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [17] in several ways. In particular, we introduce a bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs to reduce vertex reaches. Our modifications greatly improve both preprocessing and query times. The resulting algorithm is as fast as the best previous method, due to Sanders and Schultes [28]. However, our algorithm is simpler and combines in a natural way with A search, which yields significantly better query times.",
"title": ""
},
{
"docid": "6ed26bfb94b03c262fe6173a5baaf8f7",
"text": "The main goal of a persuasion dialogue is to persuade, but agents may have a number of additional goals concerning the dialogue duration, how much and what information is shared or how aggressive the agent is. Several criteria have been proposed in the literature covering different aspects of what may matter to an agent, but it is not clear how to combine these criteria that are often incommensurable and partial. This paper is inspired by multi-attribute decision theory and considers argument selection as decision-making where multiple criteria matter. A meta-level argumentation system is proposed to argue about what argument an agent should select in a given persuasion dialogue. The criteria and sub-criteria that matter to an agent are structured hierarchically into a value tree and meta-level argument schemes are formalized that use a value tree to justify what argument the agent should select. In this way, incommensurable and partial criteria can be combined.",
"title": ""
},
{
"docid": "e485aca373cf4543e1a8eeadfa0e6772",
"text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.",
"title": ""
},
{
"docid": "da7eae0fc41a9f956a2666a42a30691e",
"text": "Selected findings from the study: – Generally high correlations (.30 – 1.00). – Most correlations very intutive, very few unintuitive. – Some theoretical dimensions hard to separate. – More convincing correlates most with overall quality (.64). – Thought through shows the highest ratings (overall quality 1.8). – Off-topic shows the lowest ratings (overall quality 1.1). www.webis.de Bauhaus-Universität Weimar * www.cs.toronto.edu/compling University of Toronto ** www.ukp.tu-darmstadt.de Technische Universität Darmstadt *** ie.ibm.com IBM Research Ireland ****",
"title": ""
},
{
"docid": "2f122217b79d258e2001bb16d639b6e4",
"text": "Electrochemical Impedance Spectroscopy (EIS) has been recently proposed as a simple non-invasive technique to monitor the amount of fat and liquids contained inside the human body. While the technique capabilities are still questioned, a simple and low cost device capable of performing this kind of measurements would help testing it on many patients with minimal effort. This paper describes an extremely low cost implementation of an EIS system suitable for medical applications that is based on a simple commercial Arduino Board whose cost is below 50$. The circuit takes advantage of the ADC and DAC made available by the microcontroller of the Arduino boards and employs a logarithmic amplifier to extend the impedance measuring range to 6 decades without using complex programmable gain amplifiers. This way the device can use electrodes with sizes in the range of 1 cm2 to 40 cm2. The EIS can be measured in the frequency range of 1 Hz to 100 kHz and in the impedance range of 1 kΩ to 1 GΩ. The instrument automatically compensates the DC voltage due to the skin/electrode contact and runs on batteries. The EIS traces can be stored inside the device and transferred to the PC with a wireless link avoiding safety issues.",
"title": ""
},
{
"docid": "bfc663107f88522f438bd173db2b85ce",
"text": "While much progress has been made in how to encode a text sequence into a sequence of vectors, less attention has been paid to how to aggregate these preceding vectors (outputs of RNN/CNN) into fixed-size encoding vector. Usually, a simple max or average pooling is used, which is a bottom-up and passive way of aggregation and lack of guidance by task information. In this paper, we propose an aggregation mechanism to obtain a fixed-size encoding with a dynamic routing policy. The dynamic routing policy is dynamically deciding that what and how much information need be transferred from each word to the final encoding of the text sequence. Following the work of Capsule Network, we design two dynamic routing policies to aggregate the outputs of RNN/CNN encoding layer into a final encoding vector. Compared to the other aggregation methods, dynamic routing can refine the messages according to the state of final encoding vector. Experimental results on five text classification tasks show that our method outperforms other aggregating models by a significant margin. Related source code is released on our github page1.",
"title": ""
},
{
"docid": "f7beb099d1bab2371807e531734c3b1a",
"text": "In this work, we successfully extended differential transform method (DTM), by presenting and proving new theorems, to the solution of differential–difference equations (DDEs). Theorems are presented in the most general form to cover a wide range of DDEs, being linear or nonlinear and constant or variable coefficient. In order to show the power and the robustness of the method and to illustrate the pertinent features of related theorems, examples are presented. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "305084bdd1a4a33c8d9fd102f864fb52",
"text": "We present a method for hierarchical image segmentation that defines a disaffinity graph on the image, over-segments it into watershed basins, defines a new graph on the basins, and then merges basins with a modified, size-dependent version of single linkage clustering. The quasilinear runtime of the method makes it suitable for segmenting large images. We illustrate the method on the challenging problem of segmenting 3D electron microscopic brain images.",
"title": ""
},
{
"docid": "ac9fb08fd12fc776138b2735cd370118",
"text": "In this paper we study 3D convolutional networks for video understanding tasks. Our starting point is the stateof-the-art I3D model of [3], which “inflates” all the 2D filters of the Inception architecture to 3D. We first consider “deflating” the I3D model at various levels to understand the role of 3D convolutions. Interestingly, we found that 3D convolutions at the top layers of the network contribute more than 3D convolutions at the bottom layers, while also being computationally more efficient. This indicates that I3D is better at capturing high-level temporal patterns than low-level motion signals. We also consider replacing 3D convolutions with spatiotemporal-separable 3D convolutions (i.e., replacing convolution using a kt×k×k filter with 1× k× k followed by kt× 1× 1 filters); we show that such a model, which we call S3D, is 1.5x more computationally efficient (in terms of FLOPS) than I3D, and achieves better accuracy. Finally, we explore spatiotemporal feature gating on top of S3D. The resulting model, which we call S3D-G, outperforms the state-of-the-art I3D model by 3.5% accuracy on Kinetics and reduces the FLOPS by 34%. It also achieves a new state-of-the-art performance when transferred to other action classification (UCF-101 and HMDB51) and detection (UCF-101 and JHMDB) datasets.",
"title": ""
},
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "3196c06c66b49c052d07ced0de683d02",
"text": "Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from examplebased specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquerbased deductive search paradigm that inductively reduces the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these PBE components leverage both symbolic reasoning and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This can not only lead to better heuristics, but can also enable easier development, maintenance, and even personalization of a PBE system.",
"title": ""
},
{
"docid": "292db0e308281a3c1c9be44f76eacc93",
"text": "This paper proposes steganalysis methods for extensions of least-significant bit (LSB) overwriting to both of the two lowest bit planes in digital images: there are two distinct embedding paradigms. The author investigates how detectors for standard LSB replacement can be adapted to such embedding, and how the methods of \"structural steganalysis\", which gives the most sensitive detectors for standard LSB replacement, may be extended and applied to make more sensitive purpose-built detectors for two bit plane steganography. The literature contains only one other detector specialized to detect replacement multiple bits, and those presented here are substantially more sensitive. The author also compares the detectability of standard LSB embedding with the two methods of embedding in the lower two bit planes: although the novel detectors have a high accuracy from the steganographer's point of view, the empirical results indicate that embedding in the two lowest bit planes is preferable (in some cases, highly preferable) to embedding in one",
"title": ""
},
{
"docid": "b42f2ef7f854f628eafbb6b9069b31a2",
"text": "We address the problem of real-time generation of smooth and collision-free trajectories for autonomous flight of a quadrotor through 3-D cluttered environments. Our approach starts by generating a sequence of variable sized 3-D free space grids as the virtual flight corridor using an OctoMap-based environment representation and search-based planning method. The key contribution is a quadratic programming-based formulation for generating multi-segment polynomial trajectories that are entirely fit within the corridor, and thus collision-free. Our formulation also allows incorporating higher-order dynamical constraints to ensure that the trajectory is feasible for the platform. A novel non-iterative constraint relaxation method is also proposed for implicit optimization of segment duration. The proposed algorithm runs real-time onboard our quadrotor experimental testbed. Both simulation and online experimental results are presented for performance verification.",
"title": ""
},
{
"docid": "f52d9d04c8c697dbd3d6c4dbb21092f9",
"text": "PURPOSE\nSelective retina therapy (SRT), the confined laser heating and destruction of retinal pigment epithelial cells, has been shown to treat acute types of central serous chorioretinopathy (CSC) successfully without damaging the photoreceptors and thus avoiding laser-induced scotoma. However, a benefit of laser treatment for chronic forms of CSC is questionable. In this study, the efficacy of SRT by means of the previously used 1.7-µs and shorter 300-ns pulse duration was evaluated for both types of CSC, also considering re-treatment for nonresponders.\n\n\nMATERIAL AND METHODS\nIn a two-center trial, 26 patients were treated with SRT for acute (n = 10) and chronic-recurrent CSC (n = 16). All patients presented with subretinal fluid (SRF) in OCT and leakage in fluorescein angiography (FA). SRT was performed using a prototype SRT laser system (frequency-doubled Q-switched Nd:YLF-laser, wavelength 527 nm) with adjustable pulse duration. The following irradiation settings were used: a train of 30 laser pulses with a repetition rate of 100 Hz and pulse durations of 300 ns and 1.7 µs, pulse energy 120-200 µJ, retinal spot size 200 µm. Because SRT lesions are invisible, FA was always performed 1 h after treatment to demonstrate laser outcome (5-8 single spots in the area of leakage). In cases where energy was too low, as indicated by missing FA leakage, energy was adjusted and the patient re-treated immediately. Observation intervals were after 4 weeks and 3 months. In case of nonimprovement of the disease after 3 months, re-treatment was considered.\n\n\nRESULTS\nOf 10 patients with active CSC that presents focal leakage in FA, 5 had completely resolved fluid after 4 weeks and all 10 after 3 months. Mean visual acuity increased from 76.6 ETDRS letters to 85.0 ETDRS letters 3 months after SRT. Chronic-recurrent CSC was characterized by less severe SRF at baseline in OCT and weaker leakage in FA than in acute types. Visual acuity changed from baseline 71.6 to 72.8 ETDRS letters after 3 months. At this time, SRF was absent in 3 out of 16 patients (19%), FA leakage had come to a complete stop in 6 out of 16 patients (38%). In 6 of the remaining chronic CSC patients, repeated SRT with higher pulse energy was considered because of persistent leakage activity. After the re-treatment, SRF resolved completely in 5 patients (83.3%) after only 25 days.\n\n\nCONCLUSION\nSRT showed promising results in treating acute CSC, but was less effective in chronic cases. Interestingly, re-treatment resulted in enhanced fluid resolution and dry conditions after a considerably shorter time in most patients. Therefore, SRT including re-treatment if necessary might be a valuable CSC treatment alternative even in chronic-recurrent cases.",
"title": ""
},
{
"docid": "058ca337a484d557869e08c2b47d79e9",
"text": "The role of inflammation in carcinogenesis has been extensively investigated and well documented. Many biochemical processes that are altered during chronic inflammation have been implicated in tumorigenesis. These include shifting cellular redox balance toward oxidative stress; induction of genomic instability; increased DNA damage; stimulation of cell proliferation, metastasis, and angiogenesis; deregulation of cellular epigenetic control of gene expression; and inappropriate epithelial-to-mesenchymal transition. A wide array of proinflammatory cytokines, prostaglandins, nitric oxide, and matricellular proteins are closely involved in premalignant and malignant conversion of cells in a background of chronic inflammation. Inappropriate transcription of genes encoding inflammatory mediators, survival factors, and angiogenic and metastatic proteins is the key molecular event in linking inflammation and cancer. Aberrant cell signaling pathways comprising various kinases and their downstream transcription factors have been identified as the major contributors in abnormal gene expression associated with inflammation-driven carcinogenesis. The posttranscriptional regulation of gene expression by microRNAs also provides the molecular basis for linking inflammation to cancer. This review highlights the multifaceted role of inflammation in carcinogenesis in the context of altered cellular redox signaling.",
"title": ""
},
{
"docid": "0f3d520a6d09c136816a9e0493c45db1",
"text": "Specular reflection exists widely in photography and causes the recorded color deviating from its true value, thus, fast and high quality highlight removal from a single nature image is of great importance. In spite of the progress in the past decades in highlight removal, achieving wide applicability to the large diversity of nature scenes is quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an L2 chromaticity definition and corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former illumination orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination parallel subspace, a property called pure diffuse pixels distribution rule helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculation, and thus can remove highlight from high resolution images fast. Experiments show that this method is of superior performance in various challenging cases.",
"title": ""
}
] |
scidocsrr
|
8551226584905a5dd6a556ab1ece0e80
|
Cross-Site Scripting Attacks in Social Network APIs
|
[
{
"docid": "2ca43ef1b7a919e1de0ea2bb01b9c308",
"text": "As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended remain private. By aggregating the information exposed in this manner, we demonstrate how a user’s private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook’s privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced",
"title": ""
}
] |
[
{
"docid": "5b01c2e7bba6ab1abdda9b1a23568d2a",
"text": "First, we theoretically analyze the MMD-based estimates. Our analysis establishes that, under some mild conditions, the estimate is statistically consistent. More importantly, it provides an upper bound on the error in the estimate in terms of intuitive geometric quantities like class separation and data spread. Next, we use the insights obtained from the theoretical analysis, to propose a novel convex formulation that automatically learns the kernel to be employed in the MMD-based estimation. We design an efficient cutting plane algorithm for solving this formulation. Finally, we empirically compare our estimator with several existing methods, and show significantly improved performance under varying datasets, class ratios, and training sizes.",
"title": ""
},
{
"docid": "3c103640a41779e8069219b9c4849ba7",
"text": "Electronic banking is becoming more popular every day. Financial institutions have accepted the transformation to provide electronic banking facilities to their customers in order to remain relevant and thrive in an environment that is competitive. A contributing factor to the customer retention rate is the frequent use of multiple online functionality however despite all the benefits of electronic banking, some are still hesitant to use it because of security concerns. The perception is that gender, age, education level, salary, culture and profession all have an impact on electronic banking usage. This study reports on how the Knowledge Discovery and Data Mining (KDDM) process was used to determine characteristics and electronic banking behavior of high net worth individuals at a South African bank. Findings JIBC December 2017, Vol. 22, No.3 2 indicate that product range and age had the biggest impact on electronic banking behavior. The value of user segmentation is that the financial institution can provide a more accurate service to their users based on their preferences and online banking behavior.",
"title": ""
},
{
"docid": "2ff290ba8bab0de760c289bff3feee06",
"text": "Bayesian Networks are being used extensively for reasoning under uncertainty. Inference mechanisms for Bayesian Networks are compromised by the fact that they can only deal with propositional domains. In this work, we introduce an extension of that formalism, Hierarchical Bayesian Networks, that can represent additional information about the structure of the domains of variables. Hierarchical Bayesian Networks are similar to Bayesian Networks, in that they represent probabilistic dependencies between variables as a directed acyclic graph, where each node of the graph corresponds to a random variable and is quanti ed by the conditional probability of that variable given the values of its parents in the graph. What extends the expressive power of Hierarchical Bayesian Networks is that a node may correspond to an aggregation of simpler types. A component of one node may itself represent a composite structure; this allows the representation of complex hierarchical domains. Furthermore, probabilistic dependencies can be expressed at any level, between nodes that are contained in the same structure.",
"title": ""
},
{
"docid": "75fb9b4adf41c0a93f72084cc3a7444a",
"text": "OBJECTIVE\nIn this study, we tested an expanded model of Kanter's structural empowerment, which specified the relationships among structural and psychological empowerment, job strain, and work satisfaction.\n\n\nBACKGROUND\nStrategies proposed in Kanter's empowerment theory have the potential to reduce job strain and improve employee work satisfaction and performance in current restructured healthcare settings. The addition to the model of psychological empowerment as an outcome of structural empowerment provides an understanding of the intervening mechanisms between structural work conditions and important organizational outcomes.\n\n\nMETHODS\nA predictive, nonexperimental design was used to test the model in a random sample of 404 Canadian staff nurses. The Conditions of Work Effectiveness Questionnaire, the Psychological Empowerment Questionnaire, the Job Content Questionnaire, and the Global Satisfaction Scale were used to measure the major study variables.\n\n\nRESULTS\nStructural equation modelling analyses revealed a good fit of the hypothesized model to the data based on various fit indices (chi 2 = 1140, df = 545, chi 2/df ratio = 2.09, CFI = 0.986, RMSEA = 0.050). The amount of variance accounted for in the model was 58%. Staff nurses felt that structural empowerment in their workplace resulted in higher levels of psychological empowerment. These heightened feelings of psychological empowerment in turn strongly influenced job strain and work satisfaction. However, job strain did not have a direct effect on work satisfaction.\n\n\nCONCLUSIONS\nThese results provide initial support for an expanded model of organizational empowerment and offer a broader understanding of the empowerment process.",
"title": ""
},
{
"docid": "9e5b5831ebae6fd7d38e3caeedf9a66c",
"text": "This paper introduces the Point-Based Value Iteration (PBVI) algorithm for POMDP planning. PBVI approximates an exact value iteration solution by selecting a small set of representative belief points, and planning for those only. By using stochastic trajectories to choose belief points, and by maintaining only one value hyperplane per point, it is able to successfully solve large problems, including the roboticTagdomain, a POMDP version of the popular game of lasertag.",
"title": ""
},
{
"docid": "b56467b5761a1294bb2b1739d6504ef2",
"text": "This paper presents the creation of a robot capable of drawing artistic portraits. The application is purely entertaining and based on existing tools for face detection and image reconstruction, as well as classical tools for trajectory planning of a 4 DOFs robot arm. The innovation of the application lies in the care we took to make the whole process as human-like as possible. The robot's motions and its drawings follow a style characteristic to humans. The portraits conserve the esthetics features of the original images. The whole process is interactive, using speech recognition and speech synthesis to conduct the scenario",
"title": ""
},
{
"docid": "aa4e3c2db7f1a1ac749d5d34014e26a0",
"text": "In this paper, a novel text clustering technique is proposed to summarize text documents. The clustering method, so called ‘Ensemble Clustering Method’, combines both genetic algorithms (GA) and particle swarm optimization (PSO) efficiently and automatically to get the best clustering results. The summarization with this clustering method is to effectively avoid the redundancy in the summarized document and to show the good summarizing results, extracting the most significant and non-redundant sentence from clustering sentences of a document. We tested this technique with various text documents in the open benchmark datasets, DUC01 and DUC02. To evaluate the performances, we used F-measure and ROUGE. The experimental results show that the performance capability of our method is about 11% to 24% better than other summarization algorithms. Key-Words: Text Summarization; Extractive Summarization; Ensemble Clustering; Genetic Algorithms; Particle Swarm Optimization",
"title": ""
},
{
"docid": "b1c4910538cf73a19e783ed3dfc5f450",
"text": "The electroencephalogram (EEG) signals are commonly used for diagnosis of epilepsy. In this paper, we present a new methodology for EEG-based automated diagnosis of epilepsy. Our method involves detection of key points at multiple scales in EEG signals using a pyramid of difference of Gaussian filtered signals. Local binary patterns (LBPs) are computed at these key points and the histogram of these patterns are considered as the feature set, which is fed to the support vector machine (SVM) for the classification of EEG signals. The proposed methodology has been investigated for the four well-known classification problems namely, 1) normal and epileptic seizure, 2) epileptic seizure and seizure free, 3) normal, epileptic seizure, and seizure free, and 4) epileptic seizure and nonseizure EEG signals using publically available university of Bonn EEG database. Our experimental results in terms of classification accuracies have been compared with existing methods for the classification of the aforementioned problems. Further, performance evaluation on another EEG dataset shows that our approach is effective for classification of seizure and seizure-free EEG signals. The proposed methodology based on the LBP computed at key points is simple and easy to implement for real-time epileptic seizure detection.",
"title": ""
},
{
"docid": "9124e6f3679d4a86b568a2382cad6970",
"text": "Text. Linear Algebra and its Applications, David Lay, 5th edition. ISBN-13: 978-0321982384 The book can be purchased from the University Bookstore, or bought online, but you are responsible for making sure you purchase the correct book. If you buy an older edition, it is your responsibility to make sure you’re reading the correct sections and doing the correct homework problems. I strongly recommend you try for a new or used version of this edition.",
"title": ""
},
{
"docid": "2a3273a7308273887b49f2d6cc99fe68",
"text": "The healthcare industry collects huge amounts of healthcare data which, unfortunately, are not \";mined\"; to discover hidden information for effective decision making. Discovery of hidden patterns and relationships often goes unexploited. Advanced data mining techniques can help remedy this situation. This research has developed a prototype Intelligent Heart Disease Prediction System (IHDPS) using data mining techniques, namely, Decision Trees, Naive Bayes and Neural Network. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. IHDPS can answer complex \";what if\"; queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established. IHDPS is Web-based, user-friendly, scalable, reliable and expandable. It is implemented on the .NET platform.",
"title": ""
},
{
"docid": "52a5f4c15c1992602b8fe21270582cc6",
"text": "This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO’s computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On realworld sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.",
"title": ""
},
{
"docid": "5c9ba6384b6983a26212e8161e502484",
"text": "The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples – ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.",
"title": ""
},
{
"docid": "acf514a4aa34487121cc853e55ceaed4",
"text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.",
"title": ""
},
{
"docid": "a9fa8bd1c06d4fe95c837a0990246d6e",
"text": "Efforts in the field of multicultural education for the health professions have focused on increasing trainees' knowledge base and awareness of other cultures, and on teaching technical communication skills in cross-cultural encounters. Yet to be adequately addressed in training are profound issues of racial bias and the often awkward challenge of cross-racial dialogue, both of which likely play some part in well-documented racial disparities in health care encounters. We seek to establish the need for the skill of dialoguing explicitly with patients, colleagues, and others about race and racism and its implications for patient well-being, for clinical practice, and for the ongoing personal and professional development of health care professionals. We present evidence establishing the need to go beyond training in interview skills that efficiently \"extract\" relevant cultural and clinical information from patients. This evidence includes concepts from social psychology that include implicit bias, explicit bias, and aversive racism. Aiming to connect the dots of diverse literatures, we believe health professions educators and institutional leaders can play a pivotal role in reducing racial disparities in health care encounters by actively promoting, nurturing, and participating in this dialogue, modeling its value as an indispensable skill and institutional priority.",
"title": ""
},
{
"docid": "272281eafb06f6c9dd030897e846fd00",
"text": "Cloud computing is emerging as a new paradigm of large-scale distributed computing. It is a framework for enabling convenient, on-demand network access to a shared pool of computing resources. Load balancing is one of the main challenges in cloud computing which is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overwhelmed. It helps in optimal utilization of resources and hence in enhancing the performance of the system. The goal of load balancing is to minimize the resource consumption which will further reduce energy consumption and carbon emission rate that is the dire need of cloud computing. This determines the need of new metrics, energy consumption and carbon emission for energy-efficient load balancing in cloud computing. This paper discusses the existing load balancing techniques in cloud computing and further compares them based on various parameters like performance, scalability, associated overhead etc. that are considered in different techniques. It further discusses these techniques from energy consumption and carbon emission perspective.",
"title": ""
},
{
"docid": "7b7e7db68753dc40fce611ce06dc7c74",
"text": "Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-) automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.",
"title": ""
},
{
"docid": "f00b0b00dffcae8f5f0bce8c17abc8b6",
"text": "From a marketing communication point of view, new digital marketing channels, such as Internet and mobile phones, are considered to be powerful opportunities to reach consumers by allowing interactivity and personalisation of the content and context of the message. The increased number of media has, however, led to a harder competition for consumers’ attention. Given the potential of digital media it is interesting to understand how consumers are going to relate to mobile marketing efforts. The purpose of the paper was to explore consumers’ responsiveness to mobile marketing communication. With mobile marketing we refer to the use of SMS and MMS as marketing media in push campaigns. It is argued in the paper that consumer responsiveness is a function of personally perceived relevance of the marketing message as well as on the disturbance/acceptance of the context of receiving the message. A relevance/disturbance framework can thus measure the effectiveness of mobile marketing communication. An empirical study was conducted in Finland, where responsiveness to mobile marketing was benchmarked against e-mail communication. Findings from this study indicated that responsiveness to mobile marketing communication varies among consumers. Compared to traditional direct mail and commercial email communication, the responsiveness to mobile marketing was considerably lower. However, even if the majority of consumers showed low responsiveness to mobile marketing there were also consumers who welcome such messages.",
"title": ""
},
{
"docid": "1e30d2f8e11bfbd868fdd0dfc0ea4179",
"text": "In this paper, I study how companies can use their personnel data and information from job satisfaction surveys to predict employee quits. An important issue discussed at length in the paper is how employers can ensure the anonymity of employees in surveys used for management and HR analytics. I argue that a simple mechanism where the company delegates the implementation of job satisfaction surveys to an external consulting company can be optimal. In the subsequent empirical analysis, I use a unique combination of firm-level data (personnel records) and information from job satisfaction surveys to assess the benefits for companies using data in their decision-making. Moreover, I show how companies can move from a descriptive to a predictive approach.",
"title": ""
},
{
"docid": "e5ec413c71f8f4012a94e20f7a575e68",
"text": "It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: 1) the slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. The experimental results based on real-world benchmarking function approximation and classification problems including large complex applications show that the new algorithm can produce best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.",
"title": ""
},
{
"docid": "5d5e42cdb2521c5712b372acaf7fb25a",
"text": "Unsupervised anomaly detection on multior high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.",
"title": ""
}
] |
scidocsrr
|
6312640f0ae54aa4a6eda88155c848a5
|
A Distributed Hash Table based Address Resolution Scheme for Large-Scale Ethernet Networks
|
[
{
"docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc",
"text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.",
"title": ""
}
] |
[
{
"docid": "56dd606611efb0b34777925d2c2be312",
"text": "A number of cutaneous disorders encountered by the dermatologist have overlapping cardiac pathology. In recent years, many genetic linkages common to pathological processes in the cutaneous and cardiovascular systems have been identified. This review will describe primary cutaneous disorders with potential cardiac manifestations, including congenital syndromes, inherited cutaneous disorders associated with later cardiovascular disease, and syndromes associated with early cardiovascular pathology. The dermatologist may be the first to diagnose cutaneous findings associated with underlying cardiovascular disease; therefore, it is of prime importance for the dermatologist to be aware of these associations and to direct the appropriate workup.",
"title": ""
},
{
"docid": "4a3496a835d3948299173b4b2767d049",
"text": "We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.",
"title": ""
},
{
"docid": "72e87f243e7b90b447b2aa78998f1995",
"text": "Classifying the stance expressed in online microblogging social media is an emerging problem in opinion mining. We propose a probabilistic approach to stance classification in tweets, which models stance, target of stance, and sentiment of tweet, jointly. Instead of simply conjoining the sentiment or target variables as extra variables to the feature space, we use a novel formulation to incorporate three-way interactions among sentiment-stance-input variables and three-way interactions among target-stance-input variables. The proposed specification intuitively aims to discriminate sentiment features from target features for stance classification. In addition, regularizing a single stance classifier, which handles all targets, acts as a soft weight-sharing among them. We demonstrate that discriminative training of this model achieves the state-of-the-art results in supervised stance classification, and its generative training obtains competitive results in the weakly supervised setting.",
"title": ""
},
{
"docid": "ad0e550f829ad359a87ab821b4229c40",
"text": "Amphetamine and substituted amphetamines, including methamphetamine, methylphenidate (Ritalin), methylenedioxymethamphetamine (ecstasy), and the herbs khat and ephedra, encompass the only widely administered class of drugs that predominantly release neurotransmitter, in this case principally catecholamines, by a non-exocytic mechanism. These drugs play important medicinal and social roles in many cultures, exert profound effects on mental function and behavior, and can produce neurodegeneration and addiction. Numerous questions remain regarding the unusual molecular mechanisms by which these compounds induce catecholamine release. We review current issues on the two apparent primary mechanisms--the redistribution of catecholamines from synaptic vesicles to the cytosol, and induction of reverse transport of transmitter through plasma membrane uptake carriers--and on additional drug effects that affect extracellular catecholamine levels, including uptake inhibition, effects on exocytosis, neurotransmitter synthesis, and metabolism.",
"title": ""
},
{
"docid": "2d62232cfe79a122d661ae7f05a4f883",
"text": "The main purpose of this paper is to examine some (potential) applications of quantum computation in AI and to review the interplay between quantum theory and AI. For the readers who are not familiar with quantum computation, a brief introduction to it is provided, and a famous but simple quantum algorithm is introduced so that they can appreciate the power of quantum computation. Also, a (quite personal) survey of quantum computation is presented in order to give the readers a (unbalanced) panorama of the field. The author hopes that this paper will be a useful map for AI researchers who are going to explore further and deeper connections between AI and quantum computation as well as quantum theory although some parts of the map are very rough and other parts are empty, and waiting for the readers to fill in.",
"title": ""
},
{
"docid": "a28199159d7508a7ef57cd20adf084c2",
"text": "Brain-computer interfaces (BCIs) translate brain activity into signals controlling external devices. BCIs based on visual stimuli can maintain communication in severely paralyzed patients, but only if intact vision is available. Debilitating neurological disorders however, may lead to loss of intact vision. The current study explores the feasibility of an auditory BCI. Sixteen healthy volunteers participated in three training sessions consisting of 30 2-3 min runs in which they learned to increase or decrease the amplitude of sensorimotor rhythms (SMR) of the EEG. Half of the participants were presented with visual and half with auditory feedback. Mood and motivation were assessed prior to each session. Although BCI performance in the visual feedback group was superior to the auditory feedback group there was no difference in performance at the end of the third session. Participants in the auditory feedback group learned slower, but four out of eight reached an accuracy of over 70% correct in the last session comparable to the visual feedback group. Decreasing performance of some participants in the visual feedback group is related to mood and motivation. We conclude that with sufficient training time an auditory BCI may be as efficient as a visual BCI. Mood and motivation play a role in learning to use a BCI.",
"title": ""
},
{
"docid": "14e0664fcbc2e29778a1ccf8744f4ca5",
"text": "Mobile offloading migrates heavy computation from mobile devices to cloud servers using one or more communication network channels. Communication interfaces vary in speed, energy consumption and degree of availability. We assume two interfaces: WiFi, which is fast with low energy demand but not always present and cellular, which is slightly slower has higher energy consumption but is present at all times. We study two different communication strategies: one that selects the best available interface for each transmitted packet and the other multiplexes data across available communication channels. Since the latter may experience interrupts in the WiFi connection packets can be delayed. We call it interrupted strategy as opposed to the uninterrupted strategy that transmits packets only over currently available networks. Two key concerns of mobile offloading are the energy use of the mobile terminal and the response time experienced by the user of the mobile device. In this context, we investigate three different metrics that express the energy-performance tradeoff, the known Energy-Response time Weighted Sum (EWRS), the Energy-Response time Product (ERP) and the Energy-Response time Weighted Product (ERWP) metric. We apply the metrics to the two different offloading strategies and find that the conclusions drawn from the analysis depend on the considered metric. In particular, while an additive metric is not normalised, which implies that the term using smaller scale is always favoured, the ERWP metric, which is new in this paper, allows to assign importance to both aspects without being misled by different scales. It combines the advantages of an additive metric and a product. The interrupted strategy can save energy especially if the focus in the tradeoff metric lies on the energy aspect. In general one can say that the uninterrupted strategy is faster, while the interrupted strategy uses less energy. A fast connection improves the response time much more than the fast repair of a failed connection. In conclusion, a short down-time of the transmission channel can mostly be tolerated.",
"title": ""
},
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
},
{
"docid": "ecabfcbb40fc59f1d1daa02502164b12",
"text": "We present a generalized line histogram technique to compute global rib-orientation for detecting rotated lungs in chest radiographs. We use linear structuring elements, such as line seed filters, as kernels to convolve with edge images, and extract a set of lines from the posterior rib-cage. After convolving kernels in all possible orientations in the range [0, π], we measure the angle for which the line histogram has maximum magnitude. This measure provides a good approximation of the global chest rib-orientation for each lung. A chest radiograph is said to be upright if the difference between the orientation angles of both lungs with respect to the horizontal axis, is negligible. We validate our method on sets of normal and abnormal images and argue that rib orientation can be used for rotation detection in chest radiographs as aid in quality control during image acquisition, and to discard images from training and testing data sets. In our test, we achieve a maximum accuracy of 90%.",
"title": ""
},
{
"docid": "4760b5cb00d6dce8e62984590c807590",
"text": "OBJECTIVE\nHyperglycemia improves when patients with type 2 diabetes are placed on a weight-loss diet. Improvement typically occurs soon after diet implementation. This rapid response could result from low fuel supply (calories), lower carbohydrate content of the weight-loss diet, and/or weight loss per se. To differentiate these effects, glucose, insulin, C-peptide and glucagon were determined during the last 24 h of a 3-day period without food (severe calorie restriction) and a calorie-sufficient, carbohydrate-free diet.\n\n\nRESEARCH DESIGN\nSeven subjects with untreated type 2 diabetes were studied. A randomized-crossover design with a 4-week washout period between arms was used.\n\n\nMETHODS\nResults from both the calorie-sufficient, carbohydrate-free diet and the 3-day fast were compared with the initial standard diet consisting of 55% carbohydrate, 15% protein and 30% fat.\n\n\nRESULTS\nThe overnight fasting glucose concentration decreased from 196 (standard diet) to 160 (carbohydrate-free diet) to 127 mg/dl (fasting). The 24 h glucose and insulin area responses decreased by 35% and 48% on day 3 of the carbohydrate-free diet, and by 49% and 69% after fasting. Overnight basal insulin and glucagon remained unchanged.\n\n\nCONCLUSIONS\nShort-term fasting dramatically lowered overnight fasting and 24 h integrated glucose concentrations. Carbohydrate restriction per se could account for 71% of the reduction. Insulin could not entirely explain the glucose responses. In the absence of carbohydrate, the net insulin response was 28% of the standard diet. Glucagon did not contribute to the metabolic adaptations observed.",
"title": ""
},
{
"docid": "acbf633cbf612cd0d203d9c191a156da",
"text": "In this work an efficient parallel implementation of the Chirp Scaling Algorithm for Synthetic Aperture Radar processing is presented. The architecture selected for the implementation is the general purpose graphic processing unit, as it is well suited for scientific applications and real-time implementation of algorithms. The analysis of a first implementation led to several improvements which resulted in an important speed-up. Details of the issues found are explained, and the performance improvement of their correction explicitly shown.",
"title": ""
},
{
"docid": "e5155f7df0bc1025dcd2864b2ed53a8e",
"text": "Unlike standard object classification, where the image to be classified contains one or multiple instances of the same object, indoor scene classification is quite different since the image consists of multiple distinct objects. Furthermore, these objects can be of varying sizes and are present across numerous spatial locations in different layouts. For automatic indoor scene categorization, large-scale spatial layout deformations and scale variations are therefore two major challenges and the design of rich feature descriptors which are robust to these challenges is still an open problem. This paper introduces a new learnable feature descriptor called “spatial layout and scale invariant convolutional activations” to deal with these challenges. For this purpose, a new convolutional neural network architecture is designed which incorporates a novel “spatially unstructured” layer to introduce robustness against spatial layout deformations. To achieve scale invariance, we present a pyramidal image representation. For feasible training of the proposed network for images of indoor scenes, this paper proposes a methodology, which efficiently adapts a trained network model (on a large-scale data) for our task with only a limited amount of available training data. The efficacy of the proposed approach is demonstrated through extensive experiments on a number of data sets, including MIT-67, Scene-15, Sports-8, Graz-02, and NYU data sets.",
"title": ""
},
{
"docid": "8240e0ebc13c75d774f7cc8576f78bfc",
"text": "We have built an anatomically correct testbed (ACT) hand with the purpose of understanding the intrinsic biomechanical and control features in human hands that are critical for achieving robust, versatile, and dexterous movements, as well as rich object and world exploration. By mimicking the underlying mechanics and controls of the human hand in a hardware platform, our goal is to achieve previously unmatched grasping and manipulation skills. In this paper, the novel constituting mechanisms, unique muscle to joint relationships, and movement demonstrations of the thumb, index finger, middle finger, and wrist of the ACT Hand are presented. The grasping and manipulation abilities of the ACT Hand are also illustrated. The fully functional ACT Hand platform allows for the possibility to design and experiment with novel control algorithms leading to a deeper understanding of human dexterity.",
"title": ""
},
{
"docid": "5490e43dc61771bd713f7916a1643aef",
"text": "This paper describes the integration of Amazon Alexa with the Talkamatic Dialogue Manager (TDM), and shows how flexible dialogue skills and rapid prototyping of dialogue apps can be brought to the Alexa platform. 1. Alexa Amazon’s Alexa 1 is a spoken dialogue interface open to third party developers who want to develop their own Alexa ”skills”. Alexa has received a lot of attention and has brought renewed interest to conversational interfaces. It has strong STT (Speech To Text), TTS (Text To Speech) and NLU (Natural Language Understanding) capabilities, but provides less support in the areas of dialogue management and generation, essentially leaving these tasks to the skill developer. See Figure 1 for an overview of the Alexa architecture. An Alexa Skill definition is more or less domain-specific. It also includes generation of natural language output, which makes it language specific. Leaving NLG to the Skill developer works fairly well when performing simple tasks but for domains demanding more complex conversational capabilities, development will be more challenging. Localizing skills to new languages will be another challenge especially if the languages is grammatically more cimplex than English. 2. TDM TDM (Talkamatic Dialogue Manager) [1, 2] is a Dialogue Manager with built-in multimodality, multilinguality, and multi-domain support, and an SDK enabling rapid development of conversational interfaces with a high degree of naturalness and usability. The basic principle behind TDM is separation of concerns – do not mix different kinds of knowledge. TDM keeps the following kinds of knowledge separated from each other: • Dialogue knowledge • Domain knowledge • General linguistic knowledge of a particular language • Domain-specific language • Integration to services and data Dialogue knowledge is encoded in the TDM DME (Dialogue Move Engine). Domain knowledge is declared in the DDD (see below). General linguistic knowledge is described in the Resource Grammar Library. Domain-specific language is described in the DDD-specific grammar. The Service and data integration is described by the Service Interface, a part of the DDD. 1https://developer.amazon.com/alexa 2www.talkamatic.se The dialogue knowledge encoded in TDM enables it to handle a host of dialogue behaviours, including but not limited to: • Overand other-answering (giving more or other information than requested) • Embedded subdialogues (multiple conversational threads) • Task recognition and clarification from incomplete user utterances • Grounding (verification) and correction TDM also supports localisation of applications to new languages (provided that STT and TTS is available). The currently supported and tested languages are English, Mandarin Chinese, Dutch and French. Support for more languages will be added in the future. 3. The relation Alexa – TDM We see the combination of TDM and Alexa as a perfect match. The strengths of the Alexa dialogue platform include the nicely integrated functionality for STT, NLU, and TTS, along with the integration with the Echo hardware. The strengths of TDM are centered on the Dialogue Management component and the multilingual generation. The strengths of the two platforms are thus complementary and non-overlapping. 4. TDM Alexa integration See Figure 2 for an overview of the Alexa-TDM integration. A wrapper around TDM receives intents (e.g. 
requests and questions) and slots (parameters) from Alexa, which are then translated to their TDM counterparts (request-, askand answermoves) and passed to TDM. The TDM DME (Dialogue Move Engine) then handles dialogue management (updating the information state based on observed dialogue moves, and selecting the best next system move) and the utterance generation (translating the system moves into text), which are then passed back to Alexa using the TDM wrapper. 5. Dialogue Domain Descriptions A TDM application (corresponding roughly to an Alexa skill) is defined by a DDD a Dialogue Domain Description. The DDD is a mostly declarative description of a particular dialogue subject. Firstly, it contains information about what information (basically intentions and slots) is available in a dialogue context, and how this information is related (dialogue plans). Secondly, it contains information about how users and the system speak about this information (grammar). Lastly it contains information about how the information in the dialogue is related to the real world (service interface). Copyright © 2017 ISCA INTERSPEECH 2017: Show & Tell Contribution August 20–24, 2017, Stockholm, Sweden",
"title": ""
},
{
"docid": "18216c0745ae3433b3b7f89bb7876a49",
"text": "This paper presents research using full body skeletal movements captured using video-based sensor technology developed by Vicon Motion Systems, to train a machine to identify different human emotions. The Vicon system uses a series of 6 cameras to capture lightweight markers placed on various points of the body in 3D space, and digitizes movement into x, y, and z displacement data. Gestural data from five subjects was collected depicting four emotions: sadness, joy, anger, and fear. Experimental results with different machine learning techniques show that automatic classification of this data ranges from 84% to 92% depending on how it is calculated. In order to put these automatic classification results into perspective a user study on the human perception of the same data was conducted with average classification accuracy of 93%.",
"title": ""
},
{
"docid": "6a91c45e0cfac9dd472f68aec15889eb",
"text": "UNLABELLED\nThe Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or by slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos.\n\n\nAVAILABILITY AND IMPLEMENTATION\nXPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "e84856804fd03b5334353937e9db4f81",
"text": "The probabilistic method comes up in various fields in mathematics. In these notes, we will give a brief introduction to graph theory and applications of the probabilistic method in proving bounds for Ramsey numbers and a theorem in graph cuts. This method is based on the following idea: in order to prove the existence of an object with some desired property, one defines a probability space on some larger class of objects, and then shows that an element of this space has the desired property with positive probability. The elements contained in this probability space may be of any kind. We will illustrate the probabilistic method by giving applications in graph theory.",
"title": ""
},
{
"docid": "bf1bcf55307b02adca47ff696be6f801",
"text": "INTRODUCTION\nMobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. Professional clinical organizations do not provide guidelines for evaluating mobile apps.\n\n\nMATERIALS AND METHODS\nGuidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics.\n\n\nRESULTS\nWe propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow.\n\n\nCONCLUSIONS\nUltimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.",
"title": ""
},
{
"docid": "d8e5936d24cf47f6cf0aad9792c53874",
"text": "This paper argues that the term ‘passive’ has been systematically misapplied to a class of impersonal constructions that suppress the realization of a syntactic subject. The reclassification of these constructions highlights a typological contrast between two types of verbal diathesis and clarifies the status of putative ‘passives of unaccusatives ’ and ‘transitive passives’ in Balto-Finnic and Balto-Slavic. Impersonal verb forms differ from passives in two key respects : they are insensitive to the argument structure of a verb and can be formed from unergatives or unaccusatives, and they may retain direct objects. As with other subjectless forms of personal verbs, there is a strong tendency to interpret the suppressed subject of an impersonal as an indefinite human agent. Hence impersonalization is often felicitious only for verbs that select human subjects.",
"title": ""
},
{
"docid": "41cfa1840ef8b6f35865b220c087302b",
"text": "Ultra-high voltage (>10 kV) power devices based on SiC are gaining significant attentions since Si power devices are typically at lower voltage levels. In this paper, a world record 22kV Silicon Carbide (SiC) p-type ETO thyristor is developed and reported as a promising candidate for ultra-high voltage applications. The device is based on a 2cm2 22kV p type gate turn off thyristor (p-GTO) structure. Its static as well as dynamic performances are analyzed, including the anode to cathode blocking characteristics, forward conduction characteristics at different temperatures, turn-on and turn-off dynamic performances. The turn-off energy at 6kV, 7kV and 8kV respectively is also presented. In addition, theoretical boundary of the reverse biased safe operation area (RBSOA) of the 22kV SiC ETO is obtained by simulations and the experimental test also demonstrated a wide RBSOA.",
"title": ""
}
] |
scidocsrr
|
99d036798fbfe4d1b87b7d6aa11d8577
|
Why Aren't Operating Systems Getting Faster As Fast as Hardware?
|
[
{
"docid": "ef241b52d4f4fdc892071f684b387242",
"text": "A description is given of Sprite, an experimental network operating system under development at the University of California at Berkeley. It is part of a larger research project, SPUR, for the design and construction of a high-performance multiprocessor workstation with special hardware support of Lisp applications. Sprite implements a set of kernel calls that provide sharing, flexibility, and high performance to networked workstations. The discussion covers: the application interface: the basic kernel structure; management of the file name space and file data, virtual memory; and process migration.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "ca906d18fca3f4ee83224b7728cbd379",
"text": "AIM\nTo investigate the effect of some psychosocial variables on nurses' job satisfaction.\n\n\nBACKGROUND\nNurses' job satisfaction is one of the most important factors in determining individuals' intention to stay or leave a health-care organisation. Literature shows a predictive role of work climate, professional commitment and work values on job satisfaction, but their conjoint effect has rarely been considered.\n\n\nMETHODS\nA cross-sectional questionnaire survey was adopted. Participants were hospital nurses and data were collected in 2011.\n\n\nRESULTS\nProfessional commitment and work climate positively predicted nurses' job satisfaction. The effect of intrinsic vs. extrinsic work value orientation on job satisfaction was completely mediated by professional commitment.\n\n\nCONCLUSIONS\nNurses' job satisfaction is influenced by both contextual and personal variables, in particular work climate and professional commitment. According to a more recent theoretical framework, work climate, work values and professional commitment interact with each other in determining nurses' job satisfaction.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nNursing management must be careful to keep the context of work tuned to individuals' attitude and vice versa. Improving the work climate can have a positive effect on job satisfaction, but its effect may be enhanced by favouring strong professional commitment and by promoting intrinsic more than extrinsic work values.",
"title": ""
},
{
"docid": "98fee7d44d311692677f3ace1e79b045",
"text": "Generative adversarial networks (GANs) has achieved great success in the field of image processing, Adversarial Neural Machine Translation(NMT) is the application of GANs to machine translation. Unlike previous work training NMT model through maximizing the likelihood of the human translation, Adversarial NMT minimizes the distinction between human translation and the translation generated by a NMT model. Even though Adversarial NMT has achieved impressive results, while using little in the way of prior knowledge. In this paper, we integrated bilingual dictionaries to Adversarial NMT by leveraging a character model. Extensive experiment shows that our proposed methods can achieve remarkable improvement on the translation quality of Adversarial NMT, and obtain better result than several strong baselines.",
"title": ""
},
{
"docid": "4163070f45dd4d252a21506b1abcfff4",
"text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.",
"title": ""
},
{
"docid": "da8cdee004db530e262a13e21daf4970",
"text": "Arcing between the plasma and the wafer, kit, or target in PVD processes can cause significant wafer damage and foreign material contamination which limits wafer yield. Monitoring the plasma and quickly detecting this arcing phenomena is critical to ensuring that today's PVD processes run optimally and maximize product yield. This is particularly true in 300mm semiconductor manufacturing, where energies used are higher and more product is exposed to the plasma with each wafer run than in similar 200mm semiconductor manufacturing processes.",
"title": ""
},
{
"docid": "ae4974a3d7efedab7cd6651101987e79",
"text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.",
"title": ""
},
{
"docid": "ee15c7152a2e2b9f372ca97283a3c114",
"text": "Essential oil (EO) of the leaves of Eugenia uniflora L. (Brazilian cherry tree) was evaluated for its antioxidant, antibacterial and antifungal properties. The acute toxicity of the EO administered by oral route was also evaluated in mice. The EO exhibited antioxidant activity in the DPPH, ABTS and FRAP assays and reduced lipid peroxidation in the kidney of mice. The EO also showed antimicrobial activity against two important pathogenic bacteria, Staphylococcus aureus and Listeria monocytogenes, and against two fungi of the Candida species, C. lipolytica and C. guilliermondii. Acute administration of the EO by the oral route did not cause lethality or toxicological effects in mice. These findings suggest that the EO of the leaves of E. uniflora may have the potential for use in the pharmaceutical industry.",
"title": ""
},
{
"docid": "45447ab4e0a8bd84fcf683ac482f5497",
"text": "Most of the current learning analytic techniques have as starting point the data recorded by Learning Management Systems (LMS) about the interactions of the students with the platform and among themselves. But there is a tendency on students to rely less on the functionality offered by the LMS and use more applications that are freely available on the net. This situation is magnified in studies in which students need to interact with a set of tools that are easily installed on their personal computers. This paper shows an approach using Virtual Machines by which a set of events occurring outside of the LMS are recorded and sent to a central server in a scalable and unobtrusive manner.",
"title": ""
},
{
"docid": "b87be040dae4d38538159876e01f310b",
"text": "We present data from detailed observations of CityWall, a large multi-touch display installed in a central location in Helsinki, Finland. During eight days of installation, 1199 persons interacted with the system in various social configurations. Videos of these encounters were examined qualitatively as well as quantitatively based on human coding of events. The data convey phenomena that arise uniquely in public use: crowding, massively parallel interaction, teamwork, games, negotiations of transitions and handovers, conflict management, gestures and overt remarks to co-present people, and \"marking\" the display for others. We analyze how public availability is achieved through social learning and negotiation, why interaction becomes performative and, finally, how the display restructures the public space. The multi-touch feature, gesture-based interaction, and the physical display size contributed differentially to these uses. Our findings on the social organization of the use of public displays can be useful for designing such systems for urban environments.",
"title": ""
},
{
"docid": "213ff71ab1c6ac7915f6fb365100c1f5",
"text": "Action anticipation and forecasting in videos do not require a hat-trick, as far as there are signs in the context to foresee how actions are going to be deployed. Capturing these signs is hard because the context includes the past. We propose an end-to-end network for action anticipation and forecasting with memory, to both anticipate the current action and foresee the next one. Experiments on action sequence datasets show excellent results indicating that training on histories with a dynamic memory can significantly improve forecasting performance.",
"title": ""
},
{
"docid": "8dc366f9bdcb8ade26c1dc5557c9e3e0",
"text": "While the idea that querying mechanisms for complex relationships (otherwise known as Semantic Associations) should be integral to Semantic Web search technologies has recently gained some ground, the issue of how search results will be ranked remains largely unaddressed. Since it is expected that the number of relationships between entities in a knowledge base will be much larger than the number of entities themselves, the likelihood that Semantic Association searches would result in an overwhelming number of results for users is increased, therefore elevating the need for appropriate ranking schemes. Furthermore, it is unlikely that ranking schemes for ranking entities (documents, resources, etc.) may be applied to complex structures such as Semantic Associations.In this paper, we present an approach that ranks results based on how predictable a result might be for users. It is based on a relevance model SemRank, which is a rich blend of semantic and information-theoretic techniques with heuristics that supports the novel idea of modulative searches, where users may vary their search modes to effect changes in the ordering of results depending on their need. We also present the infrastructure used in the SSARK system to support the computation of SemRank values for resulting Semantic Associations and their ordering.",
"title": ""
},
{
"docid": "9df78ef5769ed4da768d1a7b359794ab",
"text": "We describe a computer-aided optimization technique for the efficient and reliable design of compact wide-band waveguide septum polarizers (WSP). Wide-band performance is obtained by a global optimization which considers not only the septum section but also several step discontinuities placed before the ridge-to-rectangular bifurcation and the square-to-circular discontinuity. The proposed technique mnakes use of a dynamical optimization procedure which has been tested by designing several WSP operating in different frequency bands. In this work two examples are reported, one operating at Ku band and a very wideband prototype (3.4-4.2 GHz) operating in the C band. The component design, entirely carried out at computer level, has demonstrated significant advantages in terms of development times and no need of post manufacturing adjustments. The very satisfactory agreement between experimental and theoretical results further confirm the validity of the proposed technique.",
"title": ""
},
{
"docid": "81bfa507b8cd849f30c410ba96b0034e",
"text": "Augmented reality (AR) makes it possible to create games in which virtual objects are overlaid on the real world, and real objects are tracked and used to control virtual ones. We describe the development of an AR racing game created by modifying an existing racing game, using an AR infrastructure that we developed for use with the XNA game development platform. In our game, the driver wears a tracked video see-through head-worn display, and controls the car with a passive tangible controller. Other players can participate by manipulating waypoints that the car must pass and obstacles with which the car can collide. We discuss our AR infrastructure, which supports the creation of AR applications and games in a managed code environment, the user interface we developed for the AR racing game, the game's software and hardware architecture, and feedback and observations from early demonstrations.",
"title": ""
},
{
"docid": "7d6c87baff95b89d975b98bcf8a132c0",
"text": "There is precisely one complete language processing system to date: the human brain. Though there is debate on how much built-in bias human learne rs might have, we definitely acquire language in a primarily unsupervised fashio n. On the other hand, computational approaches to language processing are almost excl usively supervised, relying on hand-labeled corpora for training. This reliance is largel y due to unsupervised approaches having repeatedly exhibited discouraging performance. In particular, the problem of learning syntax (grammar) from completely unannotated text has r eceived a great deal of attention for well over a decade, with little in the way of positive results. We argue that previous methods for this task have generally underperformed becaus of the representations they used. Overly complex models are easily distracted by non-sy ntactic correlations (such as topical associations), while overly simple models aren’t r ich enough to capture important first-order properties of language (such as directionality , adjacency, and valence). In this work, we describe several syntactic representation s and associated probabilistic models which are designed to capture the basic character of natural language syntax as directly as possible. First, we examine a nested, distribut ional method which induces bracketed tree structures. Second, we examine a dependency model which induces word-to-word dependency structures. Finally, we demonstrate that these two models perform better in combination than they do alone. With these representations , high-quality analyses can be learned from surprisingly little text, with no labeled exam ples, in several languages (we show experiments with English, German, and Chinese). Our re sults show above-baseline performance in unsupervised parsing in each of these langua ges. Grammar induction methods are useful since parsed corpora e xist for only a small number of languages. More generally, most high-level NLP tasks , uch as machine translation",
"title": ""
},
{
"docid": "a12422abe3e142b83f5f242dc754cca1",
"text": "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.",
"title": ""
},
{
"docid": "db2b94a49d4907504cf2444305287ec8",
"text": "In this paper, we propose a principled Tag Disentangled Generative Adversarial Networks (TDGAN) for re-rendering new images for the object of interest from a single image of it by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly based on a given set of images that are completely/partially tagged (i.e., supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images by the generative network. In order to boost the quality of disentangled representations, the tag mapping net is integrated to explore the consistency between the image and its tags. Furthermore, the discriminative network is introduced to implement the adversarial training strategy for generating more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework in the problem of interest.",
"title": ""
},
{
"docid": "ee4c6084527c6099ea5394aec66ce171",
"text": "Gualzru’s path to the Advertisement World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Fernando Fernández, Moisés Mart́ınez, Ismael Garćıa-Varea, Jesús Mart́ınez-Gómez, Jose Pérez-Lorenzo, Raquel Viciana, Pablo Bustos, Luis J. Manso, Luis Calderita, Marco Antonio Gutiérrez Giraldo, Pedro Núñez, Antonio Bandera, Adrián Romero-Garcés, Juan Bandera and Rebeca Marfil",
"title": ""
},
{
"docid": "4fcb316e01885475920f8b91f6a4c00d",
"text": "Transportation, as a means for moving goods and people between different locations, is a vital element of modern society. In this paper, we discuss how big data technology infrastructure fits into the current development of China, and provide suggestions for improvement. We discuss the current situation of China's transportation system, and outline relevant big data technologies that are being used in the transportation domain. Finally we point out opportunities for improvement of China's transportation system, through standardisation, integration of big data analytics in a national framework, and point to the future of transportation in China and beyond.",
"title": ""
},
{
"docid": "e5380801d69c3acf7bfe36e868b1dadb",
"text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.",
"title": ""
},
{
"docid": "ca356c9bec43950b14014ff3cbb6909b",
"text": "Microbionic robots are used to access small tedious spaces with high maneuverability. These robots are employed in surveying and inspection of pipelines as well as tracking and examination of animal or human body during surgical activities. Relatively bigger and powerful robots are used for searching people trapped under wreckages and dirt after disasters. In order to achieve high maneuverability and to tackle various critical scenarios, a novel design of multi-segment Vermicular Robot is proposed with an adaptable actuation mechanism. Owing to the 3 Degrees of freedom (Dof) actuation mechanism, it will not only have faster forward motion but its full hemispherical turning capability would allow the robot to sharply steer as well as lift with smaller radii. The Robot will have the capability to simultaneously follow peristaltic motion (elongation/retraction) as well as looper motion (lifting body up/down). The paper presents locomotion patterns of the Vermicular Robot having Canfield actuation mechanism and highlights various scenarios in order to avoid obstacles en-route.",
"title": ""
},
{
"docid": "55ec669a67b88ff0b6b88f1fa6408df9",
"text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.",
"title": ""
}
] |
scidocsrr
|
5980e86122b46c47ecb9d1277583bc83
|
The cognitive benefits of interacting with nature.
|
[
{
"docid": "44ef307640f82994887b011395eba3fc",
"text": "An analysis of the underlying similarities between the Eastern meditation tradition and attention restoration theory (ART) provides a basis for an expanded framework for studying directed attention. The focus of the analysis is the active role the individual can play in the preservation and recovery of the directed attention capacity. Two complementary strategies are presented that can help individuals more effectively manage their attentional resource. One strategy involves avoiding unnecessary costs in terms of expenditure of directed attention. The other involves enhancing the effect of restorative opportunities. Both strategies are hypothesized to be more effective if one gains generic knowledge, self-knowledge, and specific skills. The interplay between a more active form of mental involvement and the more passive approach of meditation appears to have far-reaching ramifications for managing directed attention. Research on mental restoration has focused on the role of the environment and especially the natural environment. Such settings have been shown to 480 AUTHOR’S NOTE: This article benefited greatly from the many improvements in organization, expression, and content made by Rachel Kaplan and the many suggestions concerning consistency, clarity, and accuracy made by Terry Hartig. Thanks also to the SESAME group for providing a supportive environment for exploring many of the themes discussed here. The project was funded in part by USDA Forest Service, North Central Experiment Station, Urban Forestry Unit Co-operative Agreements. ENVIRONMENT AND BEHAVIOR, Vol. 33 No. 4, July 2001 480-506 © 2001 Sage Publications © 2001 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at PENNSYLVANIA STATE UNIV on April 17, 2008 http://eab.sagepub.com Downloaded from reduce both stress and directed attention fatigue (DAF) (Hartig & Evans, 1993). Far less emphasis, however, has been placed on the possibility of active participation by the individual in need of recovery. A major purpose of this article is to explore the potential of this mostly neglected component of the restorative process. The article examines the role of attention in the restoration process both from the perspective of attention restoration theory (ART) and by exploring insights from the meditation tradition. The two perspectives are different in important ways, most notably in the active role that is played by the individual. At the same time, there are interesting common issues. In particular, I explore two ways to bring these frameworks together, namely preserving directed attention by avoiding costs (i.e., things that drain the attentional resource) and recovering attention through enhancement of the restorative process. These lead to a variety of tools and strategies that are available in the quest for restoration and a set of hypotheses concerning their expected effect on an individual’s effectiveness.",
"title": ""
}
] |
[
{
"docid": "cfb7fb13adf09f5cb5657ff7f42c41e5",
"text": "The antenna design for ultra wideband (UWB) signal radiation is one of the main challenges of the UWB system, especially when low-cost, geometrically small and radio efficient structures are required for typical applications. This study presents a novel printed loop antenna with introducing an L shape portion to its arms. The antenna offers excellent performance for lower-band frequency of UWB system, ranging from 3.1 GHz to 5.1 GHz. The antenna exhibits a 10 dB return loss bandwidth over the entire frequency band. The antenna is designed on FR4 substrate and fed with 50 ohms coupled tapered transmission line. It is found that the lower frequency band depends on the L portion of the loop antenna; however the upper frequency limit was decided by the taper transmission line. Though with very simple geometry, the results are satisfactory.",
"title": ""
},
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "712636d3a1dfe2650c0568c8f7cf124c",
"text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.",
"title": ""
},
{
"docid": "33b8417f25b56e5ea9944f9f33fc162c",
"text": "Researchers have attempted to model information diffusion and topic trends and lifecycle on online social networks. They have investigated the role of content, social connections and communities, familiarity and behavioral similarity in this context. The current article presents a survey of representative models that perform topic analysis, capture information diffusion, and explore the properties of social connections in the context of online social networks. The article concludes with a set of outlines of open problems and possible directions of future research interest. This article is intended for researchers to identify the current literature, and explore possibilities to improve the art.",
"title": ""
},
{
"docid": "ba69ac7c4667eb64e45564e5a5b822d2",
"text": "Multi-unit recordings with tetrodes have been used in brain studies for many years, but surprisingly, scarcely in the cerebellum. The cerebellum is subdivided in multiple small functional zones. Understanding the proper features of the cerebellar computations requires a characterization of neuronal activity within each area. By allowing simultaneous recordings of neighboring cells, tetrodes provide a helpful technique to study the dynamics of the cerebellar local networks. Here, we discuss experimental configurations to optimize such recordings and demonstrate their use in the different layers of the cerebellar cortex. We show that tetrodes can also be used to perform simultaneous recordings from neighboring units in freely moving rats using a custom-made drive, thus permitting studies of cerebellar network dynamics in a large variety of behavioral conditions.",
"title": ""
},
{
"docid": "df2c52d659bff75639783332b9bcd571",
"text": "The Alt-Right is a neo-fascist white supremacist movement that is involved in violent extremism and shows signs of engagement in extensive disinformation campaigns. Using social media data mining, this study develops a deeper understanding of such targeted disinformation campaigns and the ways they spread. It also adds to the available literature on the endogenous and exogenous influences within the US far right, as well as motivating factors that drive disinformation campaigns, such as geopolitical strategy. This study is to be taken as a preliminary analysis to indicate future methods and follow-on research that will help develop an integrated approach to understanding the strategies and associations of the modern fascist movement.",
"title": ""
},
{
"docid": "3181171d92ce0a8d3a44dba980c0cc5f",
"text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.",
"title": ""
},
{
"docid": "06465bde1eb562e90e609a31ed2dfe70",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.",
"title": ""
},
{
"docid": "a926341e8b663de6c412b8e3a61ee171",
"text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables",
"title": ""
},
{
"docid": "2ff15076533d1065209e0e62776eaa69",
"text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high",
"title": ""
},
{
"docid": "b8e8404c061350aeba92f6ed1ecea1f1",
"text": "We consider a single-product revenue management problem where, given an initial inventory, the objective is to dynamically adjust prices over a finite sales horizon to maximize expected revenues. Realized demand is observed over time, but the underlying functional relationship between price and mean demand rate that governs these observations (otherwise known as the demand function or demand curve) is not known. We consider two instances of this problem: (i) a setting where the demand function is assumed to belong to a known parametric family with unknown parameter values; and (ii) a setting where the demand function is assumed to belong to a broad class of functions that need not admit any parametric representation. In each case we develop policies that learn the demand function “on the fly,” and optimize prices based on that. The performance of these algorithms is measured in terms of the regret: the revenue loss relative to the maximal revenues that can be extracted when the demand function is known prior to the start of the selling season. We derive lower bounds on the regret that hold for any admissible pricing policy, and then show that our proposed algorithms achieve a regret that is “close” to this lower bound. The magnitude of the regret can be interpreted as the economic value of prior knowledge on the demand function, manifested as the revenue loss due to model uncertainty.",
"title": ""
},
{
"docid": "329343cec99c221e6f6ce8e3f1dbe83f",
"text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.",
"title": ""
},
{
"docid": "3f6cbad208a819fc8fc6a46208197d59",
"text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.",
"title": ""
},
{
"docid": "a4ddf6920fa7a5c09fa0f62f9b96a2e3",
"text": "In this paper, a class of single-phase Z-source (ZS) ac–ac converters is proposed with high-frequency transformer (HFT) isolation. The proposed HFT isolated (HFTI) ZS ac–ac converters possess all the features of their nonisolated counterparts, such as providing wide range of buck-boost output voltage with reversing or maintaining the phase angle, suppressing the in-rush and harmonic currents, and improved reliability. In addition, the proposed converters incorporate HFT for electrical isolation and safety, and therefore can save an external bulky line frequency transformer, for applications such as dynamic voltage restorers, etc. The proposed HFTI ZS converters are obtained from conventional (nonisolated) ZS ac–ac converters by adding only one extra bidirectional switch, and replacing two inductors with an HFT, thus saving one magnetic core. The switching signals for buck and boost modes are presented with safe-commutation strategy to remove the switch voltage spikes. A quasi-ZS-based HFTI ac–ac is used to discuss the operation principle and circuit analysis of the proposed class of HFTI ZS ac–ac converters. Various ZS-based HFTI proposed ac–ac converters are also presented thereafter. Moreover, a laboratory prototype of the proposed converter is constructed and experiments are conducted to produce output voltage of 110 Vrms / 60 Hz, which verify the operation of the proposed converters.",
"title": ""
},
{
"docid": "de73980005a62a24820ed199fab082a3",
"text": "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user’s point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.",
"title": ""
},
{
"docid": "404bd4b3c7756c87805fa286415aac43",
"text": "Although key techniques for next-generation wireless communication have been explored separately, relatively little work has been done to investigate their potential cooperation for performance optimization. To address this problem, we propose a holistic framework for robust 5G communication based on multiple-input-multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM). More specifically, we design a new framework that supports: 1) index modulation based on OFDM (OFDM–M) [1]; 2) sub-band beamforming and channel estimation to achieve massive path gains by exploiting multiple antenna arrays [2]; and 3) sub-band pre-distortion for peak-to-average-power-ratio (PAPR) reduction [3] to significantly decrease the PAPR and communication errors in OFDM-IM by supporting a linear behavior of the power amplifier in the modem. The performance of the proposed framework is evaluated against the state-of-the-art QPSK, OFDM-IM [1] and QPSK-spatiotemporal QPSK-ST [2] schemes. The results show that our framework reduces the bit error rate (BER), mean square error (MSE) and PAPR compared to the baselines by approximately 6–13dB, 8–13dB, and 50%, respectively.",
"title": ""
},
{
"docid": "bc6cbf7da118c01d74914d58a71157ac",
"text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.",
"title": ""
},
{
"docid": "e0d4ab67dc39967b7daa4dc438ef79f5",
"text": "Biclustering techniques have been widely used to identify homogeneous subgroups within large data matrices, such as subsets of genes similarly expressed across subsets of patients. Mining a max-sum sub-matrix is a related but distinct problem for which one looks for a (non-necessarily contiguous) rectangular sub-matrix with a maximal sum of its entries. Le Van et al. [6] already illustrated its applicability to gene expression analysis and addressed it with a constraint programming (CP) approach combined with large neighborhood search (CP-LNS). In this work, we exhibit some key properties of this NP-hard problem and define a bounding function such that larger problems can be solved in reasonable time. Two different algorithms are proposed in order to exploit the highlighted characteristics of the problem: a CP approach with a global constraint (CPGC) and mixed integer linear programming (MILP). Practical experiments conducted both on synthetic and real gene expression data exhibit the characteristics of these approaches and their relative benefits over the original CP-LNS method. Overall, the CPGC approach tends to be the fastest to produce a good solution. Yet, the MILP formulation is arguably the easiest to formulate and can also be competitive.",
"title": ""
},
{
"docid": "7233197435b777dcd07a2c66be32dea9",
"text": "We present an automated assembly system that directs the actions of a team of heterogeneous robots in the completion of an assembly task. From an initial user-supplied geometric specification, the system applies reasoning about the geometry of individual parts in order to deduce how they fit together. The task is then automatically transformed to a symbolic description of the assembly-a sort of blueprint. A symbolic planner generates an assembly sequence that can be executed by a team of collaborating robots. Each robot fulfills one of two roles: parts delivery or parts assembly. The latter are equipped with specialized tools to aid in the assembly process. Additionally, the robots engage in coordinated co-manipulation of large, heavy assemblies. We provide details of an example furniture kit assembled by the system.",
"title": ""
},
{
"docid": "9fb492c57ef0795a9d71cd94a8ebc8f4",
"text": "The increasing reliance on Computational Intelligence techniques like Artificial Neural Networks and Genetic Algorithms to formulate trading decisions have sparked off a chain of research into financial forecasting and trading trend identifications. Many research efforts focused on enhancing predictive capability and identifying turning points. Few actually presented empirical results using live data and actual technical trading rules. This paper proposed a novel RSPOP Intelligent Stock Trading System, that combines the superior predictive capability of RSPOP FNN and the use of widely accepted Moving Average and Relative Strength Indicator Trading Rules. The system is demonstrated empirically using real live stock data to achieve significantly higher Multiplicative Returns than a conventional technical rule trading system. It is able to outperform the buy-and-hold strategy and generate several folds of dollar returns over an investment horizon of four years. The Percentage of Winning Trades was increased significantly from an average of 70% to more than 92% using the system as compared to the conventional trading system; demonstrating the system’s ability to filter out erroneous trading signals generated by technical rules and to preempt any losing trades. The system is designed based on the premise that it is possible to capitalize on the swings in a stock counter’s price, without a need for predicting target prices.",
"title": ""
}
] |
scidocsrr
|
c60f950e1314bcb4f2ad7f9430549c5a
|
Advances in Vision-Based Lane Detection: Algorithms, Integration, Assessment, and Perspectives on ACP-Based Parallel Vision
|
[
{
"docid": "d5abd8f68a9f77ed84ec1381584357a4",
"text": "In this paper, we study how to test the intelligence of an autonomous vehicle. Comprehensive testing is crucial to both vehicle manufactories and customers. Existing testing approaches can be categorized into two kinds: scenario-based testing and functionality-based testing. We first discuss the shortcomings of these two kinds of approaches, and then propose a new testing framework to combine the benefits of them. Based on the new semantic diagram definition for the intelligence of autonomous vehicles, we explain how to design a task for autonomous vehicle testing and how to evaluate test results. Experiments show that this new approach provides a quantitative way to test the intelligence of an autonomous vehicle.",
"title": ""
},
{
"docid": "4be5587ed82e57340a5e4c19191ed986",
"text": "Lane detection can provide important information for safety driving. In this paper, a real time vision-based lane detection method is presented to find the position and type of lanes in each video frame. In the proposed lane detection method, lane hypothesis is generated and verified based on an effective combination of lane-mark edge-link features. First, lane-mark candidates are searched inside region of interest (ROI). During this searching process, an extended edge-linking algorithm with directional edge-gap closing is used to produce more complete edge-links, and features like lane-mark edge orientation and lane-mark width are used to select candidate lane-mark edge-link pairs. For the verification of lane-mark candidates, color is checked inside the region enclosed by candidate edge-link pairs in YUV color space. Additionally, the continuity of the lane is estimated employing a Bayesian probability model based on lane-mark color and edge-link length ratio. Finally, a simple lane departure model is built to detect lane departures based on lane locations in the image Experiment results show that the proposed lane detection method can work robustly in real-time, and can achieve an average speed of 30~50ms per frame for 180x120 image size, with a correct detection rate over 92%.",
"title": ""
}
] |
[
{
"docid": "45879e14f7fe6fe527739d74595b46dd",
"text": "Malware is one of the most damaging security threats facing the Internet today. Despite the burgeoning literature, accurate detection of malware remains an elusive and challenging endeavor due to the increasing usage of payload encryption and sophisticated obfuscation methods. Also, the large variety of malware classes coupled with their rapid proliferation and polymorphic capabilities and imperfections of real-world data (noise, missing values, etc) continue to hinder the use of more sophisticated detection algorithms. This paper presents a novel machine learning based framework to detect known and newly emerging malware at a high precision using layer 3 and layer 4 network traffic features. The framework leverages the accuracy of supervised classification in detecting known classes with the adaptability of unsupervised learning in detecting new classes. It also introduces a tree-based feature transformation to overcome issues due to imperfections of the data and to construct more informative features for the malware detection task. We demonstrate the effectiveness of the framework using real network data from a large Internet service provider.",
"title": ""
},
{
"docid": "37fa4ea4d9002483dd2231d215806955",
"text": "This paper proposes an Encryption Scheme that possess the following property : An adversary, who knows the encryption algorithm and is given the cyphertext, cannot obtain any information about the clear-text.\n Any implementation of a Public Key Cryptosystem, as proposed by Diffie and Hellman in [8], should possess this property.\n Our Encryption Scheme follows the ideas in the number theoretic implementations of a Public Key Cryptosystem due to Rivest, Shamir and Adleman [13], and Rabin [12].",
"title": ""
},
{
"docid": "48e48660a711f1cf2d4d7703368b73c9",
"text": "Growing evidence suggests that transcriptional regulators and secreted RNA molecules encapsulated within membrane vesicles modify the phenotype of target cells. Membrane vesicles, actively released by cells, represent a mechanism of intercellular communication that is conserved evolutionarily and involves the transfer of molecules able to induce epigenetic changes in recipient cells. Extracellular vesicles, which include exosomes and microvesicles, carry proteins, bioactive lipids, and nucleic acids, which are protected from enzyme degradation. These vesicles can transfer signals capable of altering cell function and/or reprogramming targeted cells. In the present review we focus on the extracellular vesicle-induced epigenetic changes in recipient cells that may lead to phenotypic and functional modifications. The relevance of these phenomena in stem cell biology and tissue repair is discussed.",
"title": ""
},
{
"docid": "9359e42a21f6ed463176bbaaf9eaf387",
"text": "The existing identity authentication of IoT devices mostly depends on an intermediary institution, i.e., a CA server, which suffers from the single-point-failure attack. Even worse, the critical data of authenticated devices can be tampered by inner attacks without being identified. To address these issues, we utilize blockchain technology, which serves as a secure tamper-proof distributed ledger to IoT devices. In the proposed method, we assign a unique ID for each individual device and record them into the blockchain, so that they can authenticate each other without a central authority. We also design a data protection mechanism by hashing significant data (i.e. firmware) into the blockchain where any state changes of the data can be detected immediately. Finally, we implement a prototype based on an open source blockchain platform Hyperledger Fabric to verify the proposed system.",
"title": ""
},
{
"docid": "6e8d1b5c2183ce09aadb09e4ff215241",
"text": "The widely used ChestX-ray14 dataset addresses an important medical image classification problem and has the following caveats: 1) many lung pathologies are visually similar, 2) a variant of diseases including lung cancer, tuberculosis, and pneumonia are present in a single scan, i.e. multiple labels and 3) The incidence of healthy images is much larger than diseased samples, creating imbalanced data. These properties are common in medical domain. Existing literature uses stateof-the-art DensetNet/Resnet models being transfer learned where output neurons of the networks are trained for individual diseases to cater for multiple diseases labels in each image. However, most of them don’t consider relationship between multiple classes. In this work we have proposed a novel error function, Multi-label Softmax Loss (MSML), to specifically address the properties of multiple labels and imbalanced data. Moreover, we have designed deep network architecture based on fine-grained classification concept that incorporates MSML. We have evaluated our proposed method on various network backbones and showed consistent performance improvements of AUC-ROC scores on the ChestX-ray14 dataset. The proposed error function provides a new method to gain improved performance across wider medical datasets.",
"title": ""
},
{
"docid": "ec0f7117acc67ae85b381b1d5f2dc5fa",
"text": "We propose a generalized focal loss function based on the Tversky index to address the issue of data imbalance in medical image segmentation. Compared to the commonly used Dice loss, our loss function achieves a better trade off between precision and recall when training on small structures such as lesions. To evaluate our loss function, we improve the attention U-Net model by incorporating an image pyramid to preserve contextual features. We experiment on the BUS 2017 dataset and ISIC 2018 dataset where lesions occupy 4.84% and 21.4% of the images area and improve segmentation accuracy when compared to the standard U-Net by 25.7% and 3.6%, respectively.",
"title": ""
},
{
"docid": "efd27e1838d48342b5331b1b504d6a69",
"text": "The microflora of Tibetan kefir grains was investigated by culture- independent methods. Denaturing gradient gel electrophoresis (DGGE) of partially amplified 16S rRNA for bacteria and 26S rRNA for yeasts, followed by sequencing of the most intense bands, showed that the dominant microorganisms were Pseudomonas sp., Leuconostoc mesenteroides, Lactobacillus helveticus, Lactobacillus kefiranofaciens, Lactococcus lactis, Lactobacillus kefiri, Lactobacillus casei, Kazachstania unispora, Kluyveromyces marxianus, Saccharomyces cerevisiae, and Kazachstania exigua. The bacterial communities between three kinds of Tibetan kefir grains showed 78-84% similarity, and yeasts 80-92%. The microflora is held together in the matrix of fibrillar material composed largely of a water-insoluble polysaccharide.",
"title": ""
},
{
"docid": "976dc6591e21e96ddb9ac6133a47e2ec",
"text": "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework [15] and its fast versions [14, 27]. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks. We call the proposed method \"CRAFT\" (Cascade Regionproposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals, in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter-and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.",
"title": ""
},
{
"docid": "493c45304bd5b7dd1142ace56e94e421",
"text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.",
"title": ""
},
{
"docid": "ea3b6ec7e56d8924c24e001383c330c5",
"text": "Leveraging class semantic descriptions and examples of known objects, zero-shot learning makes it possible to train a recognition model for an object class whose examples are not available. In this paper, we propose a novel zero-shot learning model that takes advantage of clustering structures in the semantic embedding space. The key idea is to impose the structural constraint that semantic representations must be predictive of the locations of their corresponding visual exemplars. To this end, this reduces to training multiple kernel-based regressors from semantic representation-exemplar pairs from labeled data of the seen object categories. Despite its simplicity, our approach significantly outperforms existing zero-shot learning methods on standard benchmark datasets, including the ImageNet dataset with more than 20,000 unseen categories.",
"title": ""
},
{
"docid": "1b9bcb2ab5bc0b2b2e475066a1f78fbe",
"text": "Fragility curves are becoming increasingly common components of flood risk assessments. This report introduces the concept of the fragility curve and shows how fragility curves are related to more familiar reliability concepts, such as the deterministic factor of safety and the relative reliability index. Examples of fragility curves are identified in the literature on structures and risk assessment to identify what methods have been used to develop fragility curves in practice. Four basic approaches are identified: judgmental, empirical, hybrid, and analytical. Analytical approaches are, by far, the most common method encountered in the literature. This group of methods is further decomposed based on whether the limit state equation is an explicit function or an implicit function and on whether the probability of failure is obtained using analytical solution methods or numerical solution methods. Advantages and disadvantages of the various approaches are considered. DISCLAIMER: The contents of this report are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official endorsement or approval of the use of such commercial products. All product names and trademarks cited are the property of their respective owners. The findings of this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents. DESTROY THIS REPORT WHEN NO LONGER NEEDED. DO NOT RETURN IT TO THE ORIGINATOR.",
"title": ""
},
{
"docid": "305f385c343a89e566aa13634964992d",
"text": "Trend-following (TF) strategies use fixed trading mechanism in order to take advantages from the long-term market moves without regards to the past price performance.In contrast with most prediction tools that stemmed from soft computing such as neural networks to predict a future trend, TF just rides on the current trend pattern to decide on buying or selling. While TF is widely applied in currency markets with a good track record for major currency pairs [1], it is doubtful that if TF can be applied in stock market. In this paper a new TF model that features both strategies of evaluating the trend by static and adaptive rules, is created from simulations and later verified on Hong Kong Hang Seng future indices. The model assesses trend profitability from the statistical features of the return distribution of the asset under consideration. The results and examples facilitate some insights on the merits of using the trend following model.",
"title": ""
},
{
"docid": "e4feba407b080e377b15f9784e98c99d",
"text": "The Ritvo Autism Asperger Diagnostic Scale-Revised (RAADS-R) is a valid and reliable instrument to assist the diagnosis of adults with Autism Spectrum Disorders (ASD). The 80-question scale was administered to 779 subjects (201 ASD and 578 comparisons). All ASD subjects met inclusion criteria: DSM-IV-TR, ADI/ADOS diagnoses and standardized IQ testing. Mean scores for each of the questions and total mean ASD vs. the comparison groups' scores were significantly different (p < .0001). Concurrent validity with Constantino Social Responsiveness Scale-Adult = 95.59%. Sensitivity = 97%, specificity = 100%, test-retest reliability r = .987. Cronbach alpha coefficients for the subscales and 4 derived factors were good. We conclude that the RAADS-R is a useful adjunct diagnostic tool for adults with ASD.",
"title": ""
},
{
"docid": "9dd245f75092adc8d8bb2b151275789b",
"text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
"title": ""
},
{
"docid": "2f7a63571f8d695d402a546a457470c4",
"text": "Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle, by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of shadow groups whose elements serve as close approximations. Over the shadow groups, the pretraining step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the simplest. Which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher order representations, and why representation complexity increases as the layers get deeper.",
"title": ""
},
{
"docid": "a9775a1819327d4d9cf228a3371d784f",
"text": "Permissionless blockchains protocols such as Bitcoin are inherently limited in transaction throughput and latency. Current efforts to address this key issue focus on off-chain payment channels that can be combined in a Payment-Channel Network (PCN) to enable an unlimited number of payments without requiring to access the blockchain other than to register the initial and final capacity of each channel. While this approach paves the way for low latency and high throughput of payments, its deployment in practice raises several privacy concerns as well as technical challenges related to the inherently concurrent nature of payments that have not been sufficiently studied so far. In this work, we lay the foundations for privacy and concurrency in PCNs, presenting a formal definition in the Universal Composability framework as well as practical and provably secure solutions. In particular, we present Fulgor and Rayo. Fulgor is the first payment protocol for PCNs that provides provable privacy guarantees for PCNs and is fully compatible with the Bitcoin scripting system. However, Fulgor is a blocking protocol and therefore prone to deadlocks of concurrent payments as in currently available PCNs. Instead, Rayo is the first protocol for PCNs that enforces non-blocking progress (i.e., at least one of the concurrent payments terminates). We show through a new impossibility result that non-blocking progress necessarily comes at the cost of weaker privacy. At the core of Fulgor and Rayo is Multi-Hop HTLC, a new smart contract, compatible with the Bitcoin scripting system, that provides conditional payments while reducing running time and communication overhead with respect to previous approaches. Our performance evaluation of Fulgor and Rayo shows that a payment with 10 intermediate users takes as few as 5 seconds, thereby demonstrating their feasibility to be deployed in practice.",
"title": ""
},
{
"docid": "1a592e514e27c4421c28434847f167cb",
"text": "There appears to be a paradox between the methods of product valuation considered theoretically most suitable and those effectively used by enterprises according to empirical studies already made. This research strives to clarify which factors explain enterprises continued use of theoretically inadequate methods. The study aims to identify the methods used by Portuguese small and medium size enterprises to value products and to analyze if the management accounting software influences the methods used. Accounting managers from 58 enterprises in 11 Portuguese districts were interviewed. The interviewees stated that the management accounting software influences the method of indirect cost distribution, and the association of these two variables is statistically significant. However, the individual analysis of the interviews led to the detection of a third variable, namely the way in which the product valuation was conceived, and this influences the previous two variables simultaneously. This evidence suggests that the conditioning that accounting managers believed was exerted by the management accounting software on indirect cost distribution was in fact the result of the direct influence of a third variable on the first two, namely the way in which the method was conceived. Key-words: management, accounting, software, SME, Portugal",
"title": ""
},
{
"docid": "00cd2e5ddec4789a119f6d79b39cebcb",
"text": "The historical and presettlement relationships between drought and wildfire are well documented in North America, with forest fire occurrence and area clearly increasing in response to drought. There is also evidence that drought interacts with other controls (forest productivity, topography, fire weather, management activities) to affect fire intensity, severity, extent, and frequency. Fire regime characteristics arise across many individual fires at a variety of spatial and temporal scales, so both weather and climate - including short- and long-term droughts - are important and influence several, but not all, aspects of fire regimes. We review relationships between drought and fire regimes in United States forests, fire-related drought metrics and expected changes in fire risk, and implications for fire management under climate change. Collectively, this points to a conceptual model of fire on real landscapes: fire regimes, and how they change through time, are products of fuels and how other factors affect their availability (abundance, arrangement, continuity) and flammability (moisture, chemical composition). Climate, management, and land use all affect availability, flammability, and probability of ignition differently in different parts of North America. From a fire ecology perspective, the concept of drought varies with scale, application, scientific or management objective, and ecosystem.",
"title": ""
},
{
"docid": "5e75a4ea83600736c601e46cb18aa2c9",
"text": "This paper deals with a low-cost 24GHz Doppler radar sensor for traffic surveillance. The basic building blocks of the transmit/receive chain, namely the antennas, the balanced power amplifier (PA), the dielectric resonator oscillator (DRO), the low noise amplifier (LNA) and the down conversion diode mixer are presented underlining the key technologies and manufacturing approaches by means the required performances can be attained while keeping industrial costs extremely low.",
"title": ""
},
{
"docid": "9c43ce72f77582848fd7603b9c5a9319",
"text": "This article discusses the various algorithms that make up the Netflix recommender system, and describes its business purpose. We also describe the role of search and related algorithms, which for us turns into a recommendations problem as well. We explain the motivations behind and review the approach that we use to improve the recommendation algorithms, combining A/B testing focused on improving member retention and medium term engagement, as well as offline experimentation using historical member engagement data. We discuss some of the issues in designing and interpreting A/B tests. Finally, we describe some current areas of focused innovation, which include making our recommender system global and language aware.",
"title": ""
}
] |
scidocsrr
|
73bb818bb334ad6ab435b268b712b0a8
|
Soft and Declarative Fishing of Information in Big Data Lake
|
[
{
"docid": "bb03f7d799b101966b4ea6e75cd17fea",
"text": "Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"title": ""
}
] |
[
{
"docid": "229c701c28a0398045756170aff7788e",
"text": "This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds. The framework consists of two operators: extension and restriction, mapping point cloud functions to volumetric functions and vise-versa. A point cloud convolution is defined by pull-back of the Euclidean volumetric convolution via an extension-restriction mechanism.\n The point cloud convolution is computationally efficient, invariant to the order of points in the point cloud, robust to different samplings and varying densities, and translation invariant, that is the same convolution kernel is used at all points. PCNN generalizes image CNNs and allows readily adapting their architectures to the point cloud setting.\n Evaluation of PCNN on three central point cloud learning benchmarks convincingly outperform competing point cloud learning methods, and the vast majority of methods working with more informative shape representations such as surfaces and/or normals.",
"title": ""
},
{
"docid": "0f1babfaa92c9f0a34c28814ca80cba0",
"text": "We examine who the winners are in science problem solving contests characterized by open broadcast of problem information, self-selection of external solvers to discrete problems from the laboratories of large R&D intensive companies and blind review of solution submissions. Analyzing a unique dataset of 166 science challenges involving over 12,000 scientists revealed that technical and social marginality, being a source of different perspectives and heuristics, plays an important role in explaining individual success in problem solving. The provision of a winning solution was positively related to increasing distance between the solver’s field of technical expertise and the focal field of the problem. Female solvers – known to be in the “outer circle” of the scientific establishment performed significantly better than men in developing successful solutions. Our findings contribute to the emerging literature on open and distributed innovation by demonstrating the value of openness, at least narrowly defined by disclosing problems, in removing barriers to entry to non-obvious individuals. We also contribute to the knowledge-based theory of the firm by showing the effectiveness of a market-mechanism to draw out knowledge from diverse external sources to solve internal problems",
"title": ""
},
{
"docid": "bb408cedbb0fc32f44326eff7a7390f7",
"text": "A fully integrated SONET OC-192 transmitter IC using a standard CMOS process consists of an input data register, FIFO, CMU, and 16:1 multiplexer to give a 10Gb/s serial output. A higher FEC rate, 10.7Gb/s, is supported. This chip, using a 0.18/spl mu/m process, exceeds SONET requirements, dissipating 450mW.",
"title": ""
},
{
"docid": "1dc5a78a3a9c072f1f71da4aa257d3f2",
"text": "A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention. Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks o er an e cient and principled approach for avoiding the over tting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study.",
"title": ""
},
{
"docid": "55702c5dd8986f2510b06bc15870566a",
"text": "Queuing networks are used widely in computer simulation studies. Examples of queuing networks can be found in areas such as the supply chains, manufacturing work flow, and internet routing. If the networks are fairly small in size and complexity, it is possible to create discrete event simulations of the networks without incurring significant delays in analyzing the system. However, as the networks grow in size, such analysis can be time consuming, and thus require more expensive parallel processing computers or clusters. We have constructed a set of tools that allow the analyst to simulate queuing networks in parallel, using the fairly inexpensive and commonly available graphics processing units (GPUs) found in most recent computing platforms. We present an analysis of a GPU-based algorithm, describing benefits and issues with the GPU approach. The algorithm clusters events, achieving speedup at the expense of an approximation error which grows as the cluster size increases. We were able to achieve 10-x speedup using our approach with a small error in a specific implementation of a synthetic closed queuing network simulation. This error can be mitigated, based on error analysis trends, obtaining reasonably accurate output statistics. The experimental results of the mobile ad hoc network simulation show that errors occur only in the time-dependent output statistics.",
"title": ""
},
{
"docid": "39fcc45d79680c7e231643d6c75aee18",
"text": "This paper presents a Kernel Entity Salience Model (KESM) that improves text understanding and retrieval by better estimating entity salience (importance) in documents. KESM represents entities by knowledge enriched distributed representations, models the interactions between entities and words by kernels, and combines the kernel scores to estimate entity salience. The whole model is learned end-to-end using entity salience labels. The salience model also improves ad hoc search accuracy, providing effective ranking features by modeling the salience of query entities in candidate documents. Our experiments on two entity salience corpora and two TREC ad hoc search datasets demonstrate the effectiveness of KESM over frequency-based and feature-based methods. We also provide examples showing how KESM conveys its text understanding ability learned from entity salience to search.",
"title": ""
},
{
"docid": "0e6bdfbfb3d47042a3a4f38c0260180c",
"text": "Named Entity Recognition is an important task but is still relatively new for Vietnamese. It is partly due to the lack of a large annotated corpus. In this paper, we present a systematic approach in building a named entity annotated corpus while at the same time building rules to recognize Vietnamese named entities. The resulting open source system achieves an F-measure of 83%, which is better compared to existing Vietnamese NER systems. © 2010 Springer-Verlag Berlin Heidelberg. Index",
"title": ""
},
{
"docid": "4e922bcd8fb6904ac40459cd04959fca",
"text": "Sentence simplification aims to simplify the content and structure of complex sentences, and thus make them easier to interpret for human readers, and easier to process for downstream NLP applications. Recent advances in neural machine translation have paved the way for novel approaches to the task. In this paper, we adapt an architecture with augmented memory capacities called Neural Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our experiments demonstrate the effectiveness of our approach on different simplification datasets, both in terms of automatic evaluation measures and human judgments.",
"title": ""
},
{
"docid": "7c7ccdc70ffc9c0236dd3eaf141308cb",
"text": "In recent years prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become \" superintelligent \" and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent's ability to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage. The appetite of the public and prominent intellectuals for the study of the ethical implications of artificial intelligence has increased in recent years. One captivating possibility is that artificial intelligence research might result in a 'superintelli-gence' that puts humanity at risk. (Russell 2014) has called for AI researchers to consider this possibility seriously because , however unlikely, its mere possibility is grave. (Bostrom 2014) argues for the importance of considering the risks of artificial intelligence as a research agenda. For Bostrom, the potential risks of artificial intelligence are not just at the scale of industrial mishaps or weapons of mass destruction. Rather, Bostrom argues that artificial intelligence has the potential to threaten humanity as a whole and determine the fate of the universe. We approach this grand thesis with a measure of skepticism. Nevertheless, we hope that by elucidating the argument and considering potential objections in good faith, we can get a better grip on the realistic ethical implications of artificial intelligence. This paper is in that spirit. We consider the argument for this AI doomsday scenario proposed by Bostrom (Bostrom 2014). Section 1 summarizes Bostrom's argument and motivates the work of the rest of the paper. In focuses on the conditions of an \" intelligence explosion \" that would lead to a dominant machine intelligence averse to humanity. Section 2 argues that rather than speculating broadly about general artificial intelligence, we can predict outcomes of artificial intelligence by considering more narrowly a few tasks that are essential to instrumental reasoning. Section 3 considers recalcitrance, the resistance of a system to improvements to its own intelligence, and the ways it can limit intelligence explosion. Section 4 contains …",
"title": ""
},
{
"docid": "b04a1c4a52cfe9310ff1e895ccdec35c",
"text": "The problem of recovering the sparse and low-rank components of a matrix captures a broad spectrum of applications. Authors in [4] proposed the concept of ”rank-sparsity incoherence” to characterize the fundamental identifiability of the recovery, and derived practical sufficient conditions to ensure the high possibility of recovery. This exact recovery is achieved via solving a convex relaxation problem where the l1 norm and the nuclear norm are utilized for being surrogates of the sparsity and low-rank. Numerically, this convex relaxation problem was reformulated into a semi-definite programming (SDP) problem whose dimension is considerably enlarged, and this SDP reformulation was proposed to be solved by generic interior-point solvers in [4]. This paper focuses on the algorithmic improvement for the sparse and low-rank recovery. In particular, we observe that the convex relaxation problem generated by the approach of [4] is actually well-structured in both the objective function and constraint, and it fits perfectly the applicable range of the classical alternating direction method (ADM). Hence, we propose the ADM approach for accomplishing the sparse and low-rank recovery, by taking full exploitation to the high-level separable structure of the convex relaxation problem. Preliminary numerical results are reported to verify the attractive efficiency of the ADM approach for recovering sparse and low-rank components of matrices.",
"title": ""
},
{
"docid": "a6426f7c52e0666744c4ec2760cc0046",
"text": "Growing concern about diet and health has led to development of healthier food products. In general consumer perception towards the intake of meat and meat products is unhealthy because it may increase the risk of diseases like cardiovascular diseases, obesity and cancer, because of its high fat content (especially saturated fat) and added synthetic antioxidants and antimicrobials. Addition of plant derivatives having antioxidant components including vitamins A, C and E, minerals, polyphenols, flavanoids and terpenoids in meat products may decrease the risk of several degenerative diseases. To change consumer attitudes towards meat consumption, the meat industry is undergoing major transformations by addition of nonmeat ingredients as animal fat replacers, natural antioxidants and antimicrobials, preferably derived from plant sources.",
"title": ""
},
{
"docid": "0dea4d44a4b525a91898498fadf57b8c",
"text": "Online review platforms have become a basis for many consumers to make informed decisions. This type of platforms is rich in review messages and review contributors. For marketers, the platforms’ practical importance is its influence on business outcomes. In the individual level, however, little research has investigated the impacts of a platform on consumer decision-making process. In this research, we use the heuristic-systematic model to explain how consumers establish their decision based on processing review messages on the platform. We build a research model and propose impacts of different constructs established from the systematic and heuristic processing of review messages. Survey data from a Chinese online review platform generally supports our hypotheses, except that the heuristic cue, source credibility, fails to affect consumers’ behavioral intention. Based on the findings, we discuss implications for both researchers and practitioners. We further point out limitations and suggest opportunities for future research.",
"title": ""
},
{
"docid": "60c42e3d0d0e82200a80b469a61f1921",
"text": "BACKGROUND\nDespite using sterile technique for catheter insertion, closed drainage systems, and structured daily care plans, catheter-associated urinary tract infections (CAUTIs) regularly occur in acute care hospitals. We believe that meaningful reduction in CAUTI rates can only be achieved by reducing urinary catheter use.\n\n\nMETHODS\nWe used an interventional study of a hospital-wide, multidisciplinary program to reduce urinary catheter use and CAUTIs on all patient care units in a 300-bed, community teaching hospital in Connecticut. Our primary focus was the implementation of a nurse-directed urinary catheter removal protocol. This protocol was linked to the physician's catheter insertion order. Three additional elements included physician documentation of catheter insertion criteria, a device-specific charting module added to physician electronic progress notes, and biweekly unit-specific feedback on catheter use rates and CAUTI rates in a multidisciplinary forum.\n\n\nRESULTS\nWe achieved a 50% hospital-wide reduction in catheter use and a 70% reduction in CAUTIs over a 36-month period, although there was wide variation from unit to unit in catheter reduction efforts, ranging from 4% (maternity) to 74% (telemetry).\n\n\nCONCLUSION\nUrinary catheter use, and ultimately CAUTI rates, can be effectively reduced by the diligent application of relatively few evidence-based interventions. Aggressive implementation of the nurse-directed catheter removal protocol was associated with lower catheter use rates and reduced infection rates.",
"title": ""
},
{
"docid": "904e63188b0a9772f1f81bbf42be65a1",
"text": "Malicious URLs have become a channel for Internet criminal activities such as drive-by-download, spamming and phishing. Applications for the detection of malicious URLs are accurate but slow (because they need to download the content or query some Internet host information). In this paper we present a novel lightweight filter based only on the URL string itself to use before existing processing methods. We run experiments on a large dataset and demonstrate a 75% reduction in workload size while retaining at least 90% of malicious URLs. Existing methods do not scale well with the hundreds of millions of URLs encountered every day as the problem is a heavily-imbalanced, large-scale binary classification problem. Our proposed method is able to handle nearly two million URLs in less than five minutes. We generate two filtering models by using lexical features and descriptive features, and then combine the filtering results. The on-line learning algorithms are applied here not only for dealing with large-scale data sets but also for fitting the very short lifetime characteristics of malicious URLs. Our filter can significantly reduce the volume of URL queries on which further analysis needs to be performed, saving both computing time and bandwidth used for content retrieval.",
"title": ""
},
{
"docid": "3f292307824ed0b4d7fd59824ff9dd2b",
"text": "The aim of this qualitative study was to obtain a better understanding of the developmental trajectories of persistence and desistence of childhood gender dysphoria and the psychosexual outcome of gender dysphoric children. Twenty five adolescents (M age 15.88, range 14-18), diagnosed with a Gender Identity Disorder (DSM-IV or DSM-IV-TR) in childhood, participated in this study. Data were collected by means of biographical interviews. Adolescents with persisting gender dysphoria (persisters) and those in whom the gender dysphoria remitted (desisters) indicated that they considered the period between 10 and 13 years of age to be crucial. They reported that in this period they became increasingly aware of the persistence or desistence of their childhood gender dysphoria. Both persisters and desisters stated that the changes in their social environment, the anticipated and actual feminization or masculinization of their bodies, and the first experiences of falling in love and sexual attraction had influenced their gender related interests and behaviour, feelings of gender discomfort and gender identification. Although, both persisters and desisters reported a desire to be the other gender during childhood years, the underlying motives of their desire seemed to be different.",
"title": ""
},
{
"docid": "154f5455f593e8ebf7058cc0a32426a2",
"text": "Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze them in the Cloud, have been developed with the spread of various sensors and Cloud computing technologies. However, difficulties arise because of the limitation of the network bandwidth between the sensors and the Cloud. In addition, sending raw sensor data to a Cloud may introduce privacy issues. Therefore, we propose distributed deep learning processing between sensors and the Cloud in a pipeline manner to reduce the amount of data sent to the Cloud and protect the privacy of the users. In this paper, we have developed a pipeline-based distributed processing method for the Caffe deep learning framework and investigated the processing times of the classification by varying a division point and the parameters of the network models using data sets, CIFAR-10 and ImageNet. The experiments show that the accuracy of deep learning with coarse-grain data is comparable to that with the default parameter settings, and the proposed distributed processing method has performance advantages in cases of insufficient network bandwidth with actual sensors and a Cloud environment.",
"title": ""
},
{
"docid": "149de84d7cbc9ea891b4b1297957ade7",
"text": "Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.",
"title": ""
},
{
"docid": "c21280fa617bcf55991702211f1fde8b",
"text": "How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. A major motivation for the present work is the unknown reachability of various entanglement classes in quantum experiments. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments-a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.",
"title": ""
}
] |
scidocsrr
|
3bd533a37441497ae2b6fd1f8abe8f6e
|
A resilience-based framework for evaluating adaptive co-management: Linking ecology, economics and society in a complex world
|
[
{
"docid": "5eb526843c41d2549862b60c17110b5b",
"text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.",
"title": ""
}
] |
[
{
"docid": "9a87f11fed489f58b0cdd15b329e5245",
"text": "BACKGROUND\nBracing is an effective strategy for scoliosis treatment, but there is no consensus on the best type of brace, nor on the way in which it should act on the spine to achieve good correction. The aim of this paper is to present the family of SPoRT (Symmetric, Patient-oriented, Rigid, Three-dimensional, active) braces: Sforzesco (the first introduced), Sibilla and Lapadula.\n\n\nMETHODS\nThe Sforzesco brace was developed following specific principles of correction. Due to its overall symmetry, the brace provides space over pathological depressions and pushes over elevations. Correction is reached through construction of the envelope, pushes, escapes, stops, and drivers. The real novelty is the drivers, introduced for the first time with the Sforzesco brace; they allow to achieve the main action of the brace: a three-dimensional elongation pushing the spine in a down-up direction.Brace prescription is made plane by plane: frontal (on the \"slopes\", another novelty of this concept, i.e. the laterally flexed sections of the spine), horizontal, and sagittal. The brace is built modelling the trunk shape obtained either by a plaster cast mould or by CAD-CAM construction. Brace checking is essential, since SPoRT braces are adjustable and customisable according to each individual curve pattern.Treatment time and duration is individually tailored (18-23 hours per day until Risser 3, then gradual reduction). SEAS (Scientific Exercises Approach to Scoliosis) exercises are a key factor to achieve success.\n\n\nRESULTS\nThe Sforzesco brace has shown to be more effective than the Lyon brace (matched case/control), equally effective as the Risser plaster cast (prospective cohort with retrospective controls), more effective than the Risser cast + Lyon brace in treating curves over 45 degrees Cobb (prospective cohort), and is able to improve aesthetic appearance (prospective cohort).\n\n\nCONCLUSIONS\nThe SPoRT concept of bracing (three-dimensional elongation pushing in a down-up direction) is different from the other corrective systems: 3-point, traction, postural, and movement-based. The Sforzesco brace, being comparable to casting, may be the best brace for the worst cases.",
"title": ""
},
{
"docid": "0158d18bbe621196f144ae9ed4b5db2d",
"text": "We introduce a novel metric for speech recognition success in voice search tasks, designed to reflect the impact of speech recognition errors on user's overall experience with the system. The computation of the metric is seeded using intuitive labels from human subjects and subsequently automated by replacing human annotations with a machine learning algorithm. The results show that search-based recognition accuracy is significantly higher than accuracy based on sentence error rate computation, and that the automated system is very successful in replicating human judgments regarding search quality results.",
"title": ""
},
{
"docid": "5923cd462b5b09a3aabd0fbf5c36f00c",
"text": "Exoskeleton robots are used as assistive limbs for elderly persons, rehabilitation for paralyzed persons or power augmentation purposes for healthy persons. The similarity of the exoskeleton robots and human body neuro-muscular system maximizes the device performance. Human body neuro-muscular system provides a flexible and safe movement capability with minimum energy consumption by varying the stiffness of the human joints regularly. Similar to human body, variable stiffness actuators should be used to provide a flexible and safe movement capability in exoskeletons. In the present day, different types of variable stiffness actuator designs are used, and the studies on these actuators are still continuing rapidly. As exoskeleton robots are mobile devices working with the equipment such as batteries, the motors used in the design are expected to have minimal power requirements. In this study, antagonistic, pre-tension and controllable transmission ratio type variable stiffness actuators are compared in terms of energy efficiency and power requirement at an optimal (medium) walking speed for ankle joint. In the case of variable stiffness, the results show that the controllable transmission ratio type actuator compared with the antagonistic design is more efficient in terms of energy consumption and power requirement.",
"title": ""
},
{
"docid": "3fd52b589a58f449ab1c03a19a034a2d",
"text": "This paper presents a low-power high-bit-rate phase modulator based on a digital PLL with single-bit TDC and two-point injection scheme. At high bit rates, this scheme requires a controlled oscillator with wide tuning range and becomes critically sensitive to the delay spread between the two injection paths, considerably degrading the achievable error-vector magnitude and causing significant spectral regrowth. A multi-capacitor-bank oscillator topology with an automatic background regulation of the gains of the banks and a digital adaptive filter for the delay-spread correction are introduced. The phase modulator fabricated in a 65-nm CMOS process synthesizes carriers in the 2.9-to-4.0-GHz range from a 40-MHz crystal reference and it is able to produce a phase change up to ±π with 10-bit resolution in a single reference cycle. Measured EVM at 3.6 GHz is -36 dB for a 10-Mb/s GMSK and a 20-Mb/s QPSK modulation. Power dissipation is 5 mW from a 1.2-V voltage supply, leading to a total energy consumption of 0.25 nJ/bit.",
"title": ""
},
{
"docid": "7394baa66902d1330cd0fbf27c0b0d98",
"text": "With the world turning into a global village due to technological advancements, automation in all aspects of life is gaining momentum. Wireless technologies address the everincreasing demands of portable and flexible communications. Wireless ad-hoc networks, which allow communication between devices without the need for any central infrastructure, are gaining significance, particularly for monitoring and surveillance applications. A relatively new research area of ad-hoc networks is flying ad-hoc networks (FANETs), governing the autonomous movement of unmanned aerial vehicles (UAVs) [1]. In such networks multiple UAVs are allowed to communicate so that an ad-hoc network is established between them. All UAVs in the network carry UAV-to-UAV communication and only groups of UAVs interact with the ground station. This feature eliminates the need for deployment of complex hardware in each UAV. Moreover, even if one of the UAV communication links breaks down; there is no link breakage with the base station due to the ad-hoc network between UAVs.",
"title": ""
},
{
"docid": "9b5eca94a1e02e97e660d0f5e445a8a1",
"text": "PURPOSE\nThe purpose of this study was to evaluate the effect of individualized repeated intravitreal injections of ranibizumab (Lucentis, Genentech, South San Francisco, CA) on visual acuity and central foveal thickness (CFT) for branch retinal vein occlusion-induced macular edema.\n\n\nMETHODS\nThis study was a prospective interventional case series. Twenty-eight eyes of 28 consecutive patients diagnosed with branch retinal vein occlusion-related macular edema treated with repeated intravitreal injections of ranibizumab (when CFT was >225 microm) were evaluated. Optical coherence tomography and fluorescein angiography were performed monthly.\n\n\nRESULTS\nThe mean best-corrected distance visual acuity improved from 62.67 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.74 +/- 0.28 [mean +/- standard deviation]) at baseline to 76.8 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.49 +/- 0.3; statistically significant, P < 0.001) at the end of the follow-up (9 months). The mean letter gain (including the patients with stable and worse visual acuities) was 14.3 letters (2.9 lines). During the same period, 22 of the 28 eyes (78.6%) showed improved visual acuity, 4 (14.2%) had stable visual acuity, and 2 (7.14%) had worse visual acuity compared with baseline. The mean CFT improved from 349 +/- 112 microm at baseline to 229 +/- 44 microm (significant, P < 0.001) at the end of follow-up. A mean of six injections was performed during the follow-up period. Our subgroup analysis indicated that patients with worse visual acuity at presentation (<or=50 letters in our series) showed greater visual benefit from treatment. \"Rebound\" macular edema was observed in 5 patients (17.85%) at the 3-month follow-up visit and in none at the 6- and 9-month follow-ups. In 18 of the 28 patients (53.6%), the CFT was <225 microm at the last follow-up visit, and therefore, further treatment was not instituted. No ocular or systemic side effects were noted.\n\n\nCONCLUSION\nIndividualized repeated intravitreal injections of ranibizumab showed promising short-term results in visual acuity improvement and decrease in CFT in patients with macular edema associated with branch retinal vein occlusion. Further studies are needed to prove the long-term effect of ranibizumab treatment on patients with branch retinal vein occlusion.",
"title": ""
},
{
"docid": "b8217034df7563c8b6c0b3191ab8232a",
"text": "BACKGROUND\nA holistic approach to health requires the development of tools that would allow to measure the inner world of individuals within its physical, mental and social dimensions.\n\n\nOBJECTIVES\nTo create the Physical, Mental and Social Well-being scale (PMSW-21) that allows a holistic representation of various dimensions of well-being in such a way as they are perceived by the individuals and how affected their health.\n\n\nMATERIAL AND METHODS\nThe study was conducted on the sample of 406 inhabitants of Warsaw involving in the Social Participation in Health Reform project. The PMSW-21 scale included: headache, tiredness, abdominal pain, palpitation, joint pain, backache, sleep disturbance (physical domain), anxiety, guiltiness, helplessness, hopelessness, sadness, self-dissatisfaction, hostility (mental domain), security, communicability, protection, loneliness, rejection, sociability and appreciation (social domain). The five criterial variables of health and seven of life experiences were adopted to assess the discriminative power of the PMSW-21 scale.\n\n\nRESULTS\nThe total well-being scale as well as its physical, mental and social domains showed high reliability (Cronbach a 0.81, 0.77, 0.90, 0.72, respectively). The analysis confirmed the construct validity. All the items stronger correlated with their own domain than with the others (ranges for physical: 0.41 - 0.55, mental: 0.49 - 0.80 and social: 0.31 - 0.50). The total scale demonstrate high sensitivity; it significantly differentiated almost all criterial variables. Physical domain showed high sensitivity for health as well as for negative life events variables, while the mental and social domains were more sensitive for life events.\n\n\nCONCLUSIONS\nThe analysis confirmed the usefulness of PMSW-21 scale for measure the holistic well-being. The reliability of the total scale and its domains, construct validity and sensitivity for health and life determinants were at acceptable level.",
"title": ""
},
{
"docid": "63685d4935ae48e36d6d83073cd50616",
"text": "Graphs provide a powerful means for representing complex interactions between entities. Recently, new deep learning approaches have emerged for representing and modeling graphstructured data while the conventional deep learning methods, such as convolutional neural networks and recurrent neural networks, have mainly focused on the grid-structured inputs of image and audio. Leveraged by representation learning capabilities, deep learning-based techniques can detect structural characteristics of graphs, giving promising results for graph applications. In this paper, we attempt to advance deep learning for graph-structured data by incorporating another component: transfer learning. By transferring the intrinsic geometric information learned in the source domain, our approach can construct a model for a new but related task in the target domain without collecting new data and without training a new model from scratch. We thoroughly tested our approach with large-scale real-world text data and confirmed the effectiveness of the proposed transfer learning framework for deep learning on graphs. According to our experiments, transfer learning is most effective when the source and target domains bear a high level of structural similarity in their graph representations.",
"title": ""
},
{
"docid": "c447ef57b190d129b5a44597c4d2ed80",
"text": "As most pancreatic neuroendocrine tumors (PNET) are relatively small and solitary, they may be considered well suited for removal by a minimally invasive approach. There are few large series that describe laparoscopic surgery for PNET. The primary aim of this study was to describe the feasibility, outcome, and histopathology associated with laparoscopic pancreatic surgery (LS) of PNET in a large series. All patients with PNET who underwent LS at a single hospital from March 1997 to April 2011 were included retrospectively in the study. A total of 72 patients with PNET underwent 75 laparoscopic procedures, out of which 65 were laparoscopic resections or enucleations. The median operative time of all patients who underwent resections or enucleations was 175 (60–520) min, the median blood loss was 300 (5–2,700) ml, and the median length of hospital stay was 7 (2–27) days. The overall morbidity rate was 42 %, with a surgical morbidity rate of 21 % and postoperative pancreatic fistula (POPF) formation in 21 %. Laparoscopic enucleations were associated with a higher rate of POPF than were laparoscopic resections. Five-year disease-specific survival rate was 90 %. The T stage, R stage, and a Ki-67 cutoff value of 5 % significantly predicted 5-year survival. LS of PNET is feasible with acceptable morbidity and a good overall disease-specific long-term prognosis.",
"title": ""
},
{
"docid": "9a1665cff530d93c84598e7df947099f",
"text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. In this paper we propose an computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.",
"title": ""
},
{
"docid": "91b49384769b178b300f2e3a4bd0b265",
"text": "The recently proposed self-ensembling methods have achieved promising results in deep semi-supervised learning, which penalize inconsistent predictions of unlabeled data under different perturbations. However, they only consider adding perturbations to each single data point, while ignoring the connections between data samples. In this paper, we propose a novel method, called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. Then the graph serves as a similarity measure with respect to which the representations of \"similar\" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks. The error rates are 9.89%, 3.99% for CIFAR-10 with 4000 labels, SVHN with 500 labels, respectively. In particular, the improvements are significant when the labels are fewer. For the non-augmented MNIST with only 20 labels, the error rate is reduced from previous 4.81% to 1.36%. Our method also shows robustness to noisy labels.",
"title": ""
},
{
"docid": "5f77e21de8f68cba79fc85e8c0e7725e",
"text": "We introduce structured prediction energy networks (SPENs), a flexible framework for structured prediction. A deep architecture is used to define an energy function of candidate labels, and then predictions are produced by using backpropagation to iteratively optimize the energy with respect to the labels. This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output. One natural application of our technique is multi-label classification, which traditionally has required strict prior assumptions about the interactions between labels to ensure tractable learning and prediction problems. We are able to apply SPENs to multi-label problems with substantially larger label sets than previous applications of structured prediction, while modeling high-order interactions using minimal structural assumptions. Overall, deep learning provides remarkable tools for learning features of the inputs to a prediction problem, and this work extends these techniques to learning features of structured outputs. Our experiments provide impressive performance on a variety of benchmark multi-label classification tasks, demonstrate that our technique can be used to provide interpretable structure learning, and illuminate fundamental trade-offs between feed-forward and iterative structured prediction techniques.",
"title": ""
},
{
"docid": "bdcb688bc914307d811114b2749e47c2",
"text": "E-government initiatives are in their infancy in many developing countries. The success of these initiatives is dependent on government support as well as citizens' adoption of e-government services. This study adopted the unified of acceptance and use of technology (UTAUT) model to explore factors that determine the adoption of e-government services in a developing country, namely Kuwait. 880 students were surveyed, using an amended version of the UTAUT model. The empirical data reveal that performance expectancy, effort expectancy and peer influence determine students' behavioural intention. Moreover, facilitating conditions and behavioural intentions determine students' use of e-government services. Implications for decision makers and suggestions for further research are also considered in this study.",
"title": ""
},
{
"docid": "78fecd65b909fbdfeb4b3090b2dadc01",
"text": "Advances in antenna technologies for cellular hand-held devices have been synchronous with the evolution of mobile phones over nearly 40 years. Having gone through four major wireless evolutions [1], [2], starting with the analog-based first generation to the current fourth-generation (4G) mobile broadband, technologies from manufacturers and their wireless network capacities today are advancing at unprecedented rates to meet our unrelenting service demands. These ever-growing demands, driven by exponential growth in wireless data usage around the globe [3], have gone hand in hand with major technological milestones achieved by the antenna design community. For instance, realizing the theory regarding the physical limitation of antennas [4]-[6] was paramount to the elimination of external antennas for mobile phones in the 1990s. This achievement triggered a variety of revolutionary mobile phone designs and the creation of new wireless services, establishing the current cycle of cellular advances and advances in mobile antenna technologies.",
"title": ""
},
{
"docid": "7716409441fb8e34013d3e9f58d32476",
"text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsalbased learners, and demonstrate fast, (near) optimal performance on many existing benchmark DecPOMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5034984717b3528f7f47a1f88a3b1310",
"text": "ALL RIGHTS RESERVED. This document contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without express written permission from the author / publisher.",
"title": ""
},
{
"docid": "68f0bdda44beba9203a785b8be1035bb",
"text": "Nasal mucociliary clearance is one of the most important factors affecting nasal delivery of drugs and vaccines. This is also the most important physiological defense mechanism inside the nasal cavity. It removes inhaled (and delivered) particles, microbes and substances trapped in the mucus. Almost all inhaled particles are trapped in the mucus carpet and transported with a rate of 8-10 mm/h toward the pharynx. This transport is conducted by the ciliated cells, which contain about 100-250 motile cellular appendages called cilia, 0.3 µm wide and 5 µm in length that beat about 1000 times every minute or 12-15 Hz. For efficient mucociliary clearance, the interaction between the cilia and the nasal mucus needs to be well structured, where the mucus layer is a tri-layer: an upper gel layer that floats on the lower, more aqueous solution, called the periciliary liquid layer and a third layer of surfactants between these two main layers. Pharmacokinetic calculations of the mucociliary clearance show that this mechanism may account for a substantial difference in bioavailability following nasal delivery. If the formulation irritates the nasal mucosa, this mechanism will cause the irritant to be rapidly diluted, followed by increased clearance, and swallowed. The result is a much shorter duration inside the nasal cavity and therefore less nasal bioavailability.",
"title": ""
},
{
"docid": "e4b02298a2ff6361c0a914250f956911",
"text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.",
"title": ""
},
{
"docid": "45bb7a2e45bdf752acf8bb86e6871058",
"text": "Combination of formal and semi-formal methods is more and more required to produce specifications that can be, on the one hand, understood and thus validated by both designers and users and, on the other hand, precise enough to be verified by formal methods. This motivates our aim to use these complementary paradigms in order to deal with security aspects of information systems. This paper presents a methodology to specify access control policies starting with a set of graphical diagrams: UML for the functional model, SecureUML for static access control and ASTD for dynamic access control. These diagrams are then translated into a set of B machines. Finally, we present the formal specification of an access control filter that coordinates the different kinds of access control rules and the specification of functional operations. The goal of such B specifications is to rigorously check the access control policy of an information system taking advantage of tools from the B method.",
"title": ""
},
{
"docid": "7b3b559c3263a6093b7c7d627501b800",
"text": "We propose to tackle the problem of RGB-D image disocclusion inpainting when synthesizing new views of a scene by changing its viewpoint. Indeed, such a process creates holes both in depth and color images. First, we propose a novel algorithm to perform depth-map disocclusion inpainting. Our intuitive approach works particularly well for recovering the lost structures of the objects and to inpaint the depth-map in a geometrically plausible manner. Then, we propose a depth-guided patch-based inpainting method to fill-in the color image. Depth information coming from the reconstructed depth-map is added to each key step of the classical patch-based algorithm from Criminisi et al. in an intuitive manner. Relevant comparisons to the state-of-the-art inpainting methods for the disocclusion inpainting of both depth and color images are provided and illustrate the effectiveness of our proposed algorithms.",
"title": ""
}
] |
scidocsrr
|
970be3224d1a67c0258ea3c841d4b025
|
Cloud security issues and challenges: A survey
|
[
{
"docid": "299d59735ea1170228aff531645b5d4a",
"text": "While the economic case for cloud computing is compelling, the security challenges it poses are equally striking. In this work we strive to frame the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. We examine contemporary and historical perspectives from industry, academia, government, and “black hats”. We argue that few cloud computing security issues are fundamentally new or fundamentally intractable; often what appears “new” is so only relative to “traditional” computing of the past several years. Looking back further to the time-sharing era, many of these problems already received attention. On the other hand, we argue that two facets are to some degree new and fundamental to cloud computing: the complexities of multi-party trust considerations, and the ensuing need for mutual auditability.",
"title": ""
}
] |
[
{
"docid": "030c8aeb4e365bfd2fdab710f8c9f598",
"text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.",
"title": ""
},
{
"docid": "19e2790010bfa1081fb9503ba5f9d808",
"text": "Existing electricity market segmentation analysis techniques only make use of limited consumption statistics (usually averages and variances). In this paper we use power demand distributions (PDDs) obtained from fine-grain smart meter data to perform market segmentation based on distributional clustering. We apply this approach to mining 8 months of readings from about 1000 US Google employees.",
"title": ""
},
{
"docid": "6abd94555aa69d5d27f75db272952a0e",
"text": "Text recognition in images is an active research area which attempts to develop a computer application with the ability to automatically read the text from images. Nowadays there is a huge demand of storing the information available on paper documents in to a computer readable form for later use. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. However to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved are: font characteristics of the characters in paper documents and quality of the images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus, there is a need of character recognition mechanisms to perform document image analysis which transforms documents in paper format to electronic format. In this paper, we have reviewed and analyzed different methods for text recognition from images. The objective of this review paper is to summarize the well-known methods for better understanding of the reader.",
"title": ""
},
{
"docid": "dd4750b43931b3b09a5e95eaa74455d1",
"text": "In viticulture, there are several applications where bud detection in vineyard images is a necessary task, susceptible of being automated through the use of computer vision methods. A common and effective family of visual detection algorithms are the scanning-window type, that slide a (usually) fixed size window along the original image, classifying each resulting windowed-patch as containing or not containing the target object. The simplicity of these algorithms finds its most challenging aspect in the classification stage. Interested in grapevine buds detection in natural field conditions, this paper presents a classification method for images of grapevine buds ranging 100 to 1600 pixels in diameter, captured in outdoor, under natural field conditions, in winter (i.e., no grape bunches, very few leaves, and dormant buds), without artificial background, and with minimum equipment requirements. The proposed method uses well-known computer vision technologies: Scale-Invariant Feature Transform for calculating low-level features, Bag of Features for building an image descriptor, and Support Vector Machines for training a classifier. When evaluated over images containing buds of at least 100 pixels in diameter, the approach achieves a recall higher than 0.9 and a precision of 0.86 over all windowed-patches covering the whole bud and down to 60% of it, and scaled up to window patches containing a proportion of 20%-80% of bud versus background pixels. This robustness on the position and size of the window demonstrates its viability for use as the classification stage in a scanning-window detection algorithms.",
"title": ""
},
{
"docid": "4eebd4a2d5c50a2d7de7c36c5296786d",
"text": "Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.",
"title": ""
},
{
"docid": "d72bb787f20a08e70d5f0294551907d7",
"text": "In this paper we present a novel strategy, DragPushing, for improving the performance of text classifiers. The strategy is generic and takes advantage of training errors to successively refine the classification model of a base classifier. We describe how it is applied to generate two new classification algorithms; a Refined Centroid Classifier and a Refined Naïve Bayes Classifier. We present an extensive experimental evaluation of both algorithms on three English collections and one Chinese corpus. The results indicate that in each case, the refined classifiers achieve significant performance improvement over the base classifiers used. Furthermore, the performance of the Refined Centroid Classifier implemented is comparable, if not better, to that of state-of-the-art support vector machine (SVM)-based classifier, but offers a much lower computational cost.",
"title": ""
},
{
"docid": "f2d3856f69e486622f9a305ce0e56e04",
"text": "The unmanned control of the steering wheel is, at present, one of the most important challenges facing researchers in autonomous vehicles within the field of intelligent transportation systems (ITSs). In this paper, we present a two-layer control architecture for automatically moving the steering wheel of a mass-produced vehicle. The first layer is designed to calculate the target position of the steering wheel at any time and is based on fuzzy logic. The second is a classic control layer that moves the steering bar by means of an actuator to achieve the position targeted by the first layer. Real-time kinematic differential global positioning system (RTK-DGPS) equipment is the main sensor input for positioning. It is accurate to about 1 cm and can finely locate the vehicle trajectory. The developed systems are installed on a Citroe/spl uml/n Berlingo van, which is used as a testbed vehicle. Once this control architecture has been implemented, installed, and tuned, the resulting steering maneuvering is very similar to human driving, and the trajectory errors from the reference route are reduced to a minimum. The experimental results show that the combination of GPS and artificial-intelligence-based techniques behaves outstandingly. We can also draw other important conclusions regarding the design of a control system derived from human driving experience, providing an alternative mathematical formalism for computation, human reasoning, and integration of qualitative and quantitative information.",
"title": ""
},
{
"docid": "fcfe75abfde3edbf051ccb78387c3904",
"text": "In this paper a Fuzzy Logic Controller (FLC) for path following of a four-wheel differentially skid steer mobile robot is presented. Fuzzy velocity and fuzzy torque control of the mobile robot is compared with classical controllers. To assess controllers robot kinematics and dynamics are simulated with parameters of P2-AT mobile robot. Results demonstrate the better performance of fuzzy logic controllers in following a predefined path.",
"title": ""
},
{
"docid": "d10afc83c234c1c0531e23b29b5d8895",
"text": "BACKGROUND\nThe efficacy of new antihypertensive drugs has been questioned. We compared the effects of conventional and newer antihypertensive drugs on cardiovascular mortality and morbidity in elderly patients.\n\n\nMETHODS\nWe did a prospective, randomised trial in 6614 patients aged 70-84 years with hypertension (blood pressure > or = 180 mm Hg systolic, > or = 105 mm Hg diastolic, or both). Patients were randomly assigned conventional antihypertensive drugs (atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or hydrochlorothiazide 25 mg plus amiloride 2.5 mg daily) or newer drugs (enalapril 10 mg or lisinopril 10 mg, or felodipine 2.5 mg or isradipine 2-5 mg daily). We assessed fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease. Analysis was by intention to treat.\n\n\nFINDINGS\nBlood pressure was decreased similarly in all treatment groups. The primary combined endpoint of fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease occurred in 221 of 2213 patients in the conventional drugs group (19.8 events per 1000 patient-years) and in 438 of 4401 in the newer drugs group (19.8 per 1000; relative risk 0.99 [95% CI 0.84-1.16], p=0.89). The combined endpoint of fatal and non-fatal stroke, fatal and non-fatal myocardial infarction, and other cardiovascular mortality occurred in 460 patients taking conventional drugs and in 887 taking newer drugs (0.96 [0.86-1.08], p=0.49).\n\n\nINTERPRETATION\nOld and new antihypertensive drugs were similar in prevention of cardiovascular mortality or major events. Decrease in blood pressure was of major importance for the prevention of cardiovascular events.",
"title": ""
},
{
"docid": "fec1137b7c216b839dac95ac2698f981",
"text": "In this paper, a high gain, low side lobe level Fabry Perot Cavity antenna with feed patch array is proposed. The antenna structure consists of a microstrip antenna array, which is parasitically coupled with an array of square parasitic patches fabricated on a FR4 superstrate. The patches are fabricated at the bottom of superstrate and suspended in air with the help of dielectric rods at 0.5λ0 height. Constant high gain is obtained by resonating parasitic patches at near close frequencies in 5.725–5.875GHz ISM band. The structure with 9× 9 square parasitic patches with 1.125λ0 spacing between feed elements is fabricated on 5λ0 × 5λ0 square ground. The fabricated structure provides gain of 21.5 dBi associated with side lobe level less than −25 dB, cross polarization less than −26 dB and front to back lobe ratio of more than 26 dB. The measured gain variation is less than 1 dB and VSWR is less than 2 over 5.725–5.875 GHz ISM band. The proposed structures are good candidates for base station cellular systems, satellite systems, and point-to-point links.",
"title": ""
},
{
"docid": "7c1ce170b4258e46f98c24209f0f6def",
"text": "It has been widely accepted that iris biometric systems are not subject to a template aging effect. Baker et al. [1] recently presented the first published evidence of a template aging effect, using images acquired from 2004 through 2008 with an LG 2200 iris imaging system, representing a total of 13 subjects (26 irises). We report on a template aging study involving two different iris recognition algorithms, a larger number of subjects (43), a more modern imaging system (LG 4000), and over a shorter time-lapse (2 years). We also investigate the degree to which the template aging effect may be related to pupil dilation and/or contact lenses. We find evidence of a template aging effect, resulting in an increase in match hamming distance and false reject rate.",
"title": ""
},
{
"docid": "af836023436eaa65ef55f9928312e73f",
"text": "We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.",
"title": ""
},
{
"docid": "e74a15889f39ea03256fe5c7d9cb9819",
"text": "In healthy cells, cytochrome c (Cyt c) is located in the mitochondrial intermembrane/intercristae spaces, where it functions as an electron shuttle in the respiratory chain and interacts with cardiolipin (CL). Several proapoptotic stimuli induce the permeabilization of the outer membrane, facilitate the communication between intermembrane and intercristae spaces and promote the mobilization of Cyt c from CL, allowing for Cyt c release. In the cytosol, Cyt c mediates the allosteric activation of apoptosis-protease activating factor 1, which is required for the proteolytic maturation of caspase-9 and caspase-3. Activated caspases ultimately lead to apoptotic cell dismantling. Nevertheless, cytosolic Cyt c has been associated also to vital cell functions (i.e. differentiation), suggesting that its release not always occurs in an all-or-nothing fashion and that mitochondrial outer membrane permeabilization may not invariably lead to cell death. This review deals with the events involved in Cyt c release from mitochondria, with special attention to its regulation and final consequences.",
"title": ""
},
{
"docid": "b6de6f391c11178843bc16b51bf26803",
"text": "Crowd analysis becomes very popular research topic in the area of computer vision. A growing requirement for smarter video surveillance of private and public space using intelligent vision systems which can differentiate what is semantically important in the direction of the human observer as normal behaviors and abnormal behaviors. People counting, people tracking and crowd behavior analysis are different stages for computer based crowd analysis algorithm. This paper focus on crowd behavior analysis which can detect normal behavior or abnormal behavior.",
"title": ""
},
{
"docid": "dafcff59b2ffcc02e8fc441272afdd08",
"text": "The surge in vehicular network research has led, over the last few years, to the proposal of countless network solutions specifically designed for vehicular environments. A vast majority of such solutions has been evaluated by means of simulation, since experimental and analytical approaches are often impractical and intractable, respectively. The reliability of the simulative evaluation is thus paramount to the performance analysis of vehicular networks, and the first distinctive feature that has to be properly accounted for is the mobility of vehicles, i.e., network nodes. Notwithstanding the improvements that vehicular mobility modeling has undergone over the last decade, no vehicular mobility dataset is publicly available today that captures both the macroscopic and microscopic dynamics of road traffic over a large urban region. In this paper, we present a realistic synthetic dataset, covering 24 hours of car traffic in a 400-km2 region around the city of Köln, in Germany. We describe the generation process and outline how the dataset improves the traces currently employed for the simulative evaluation of vehicular networks. We also show the potential impact that such a comprehensive mobility dataset has on the network protocol performance analysis, demonstrating how incomplete representations of vehicular mobility may result in over-optimistic network connectivity and protocol performance.",
"title": ""
},
{
"docid": "bbf9612e6073d5cc1b9ff1eec9889649",
"text": "During the last decade the amount of scientific information available on-line increased at an unprecedented rate. As a consequence, nowadays researchers are overwhelmed by an enormous and continuously growing number of articles to consider when they perform research activities like the exploration of advances in specific topics, peer reviewing, writing and evaluation of proposals. Natural Language Processing Technology represents a key enabling factor in providing scientists with intelligent patterns to access to scientific information. Extracting information from scientific papers, for example, can contribute to the development of rich scientific knowledge bases which can be leveraged to support intelligent knowledge access and question answering. Summarization techniques can reduce the size of long papers to their essential content or automatically generate state-of-the-art-reviews. Paraphrase or textual entailment techniques can contribute to the identification of relations across different scientific textual sources. This tutorial provides an overview of the most relevant tasks related to the processing of scientific documents, including but not limited to the in-depth analysis of the structure of the scientific articles, their semantic interpretation, content extraction and summarization.",
"title": ""
},
{
"docid": "4036d11f629168ffe75840fb5c741bf6",
"text": "The rapid diffusion of ‘‘microblogging’’ services such as Twitter is ushering in a new era of possibilities for organizations to communicate with and engage their core stakeholders and the general public. To enhance understanding of the communicative functions microblogging serves for organizations, this study examines the Twitter utilization practices of the 100 largest nonprofit organizations in the United States. The analysis reveals there are three key functions of microblogging updates—‘‘information,’’ ‘‘community,’’ and ‘‘action.’’ Though the informational use of microblogging is extensive, nonprofit organizations are better at using Twitter to strategically engage their stakeholders via dialogic and community-building practices than they have been with traditional websites. The adoption of social media appears to have engendered new paradigms of public engagement.",
"title": ""
},
{
"docid": "3f629998235c1cfadf67cf711b07f8b9",
"text": "The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.",
"title": ""
},
{
"docid": "a69600725f25e0e927f8ddeb1d30f99d",
"text": "Island conservation in the longer term Conservation of biodiversity on islands is important globally because islands are home to more than 20% of the terrestrial plant and vertebrate species in the world, within less than 5% of the global terrestrial area. Endemism on islands is a magnitude higher than on continents [1]; ten of the 35 biodiversity hotspots in the world are entirely, or largely consist of, islands [2]. Yet this diversity is threatened: over half of all recent extinctions have occurred on islands, which currently harbor over one-third of all terrestrial species facing imminent extinction [3] (Figure 1). In response to the biodiversity crisis, island conservation has been an active field of research and action. Hundreds of invasive species eradications and endangered species translocations have been successfully completed [4–6]. However, despite climate change being an increasing research focus generally, its impacts on island biodiversity are only just beginning to be investigated. For example, invasive species eradications on islands have been prioritized largely by threats to native biodiversity, eradication feasibility, economic cost, and reinvasion potential, but have never considered the threat of sea-level rise. Yet, the probability and extent of island submersion would provide a relevant metric for the longevity of long-term benefits of such eradications.",
"title": ""
},
{
"docid": "35f2e6242ca33c7bb7127cf4111b088a",
"text": "We present a new algorithm for efficiently training n-gram language models on uncertain data, and illustrate its use for semisupervised language model adaptation. We compute the probability that an n-gram occurs k times in the sample of uncertain data, and use the resulting histograms to derive a generalized Katz back-off model. We compare three approaches to semisupervised adaptation of language models for speech recognition of selected YouTube video categories: (1) using just the one-best output from the baseline speech recognizer or (2) using samples from lattices with standard algorithms versus (3) using full lattices with our new algorithm. Unlike the other methods, our new algorithm provides models that yield solid improvements over the baseline on the full test set, and, further, achieves these gains without hurting performance on any of the set of video categories. We show that categories with the most data yielded the largest gains. The algorithm has been released as part of the OpenGrm n-gram library [1].",
"title": ""
}
] |
scidocsrr
|
2bc035249b762c997f72e7353dba93af
|
Collaborative data analytics for smart buildings: opportunities and models
|
[
{
"docid": "d16e579aadf2e9c871c76a201fa5cc29",
"text": "Worldwide, buildings account for ca. 40% of the total energy consumption and ca. 20% of the total CO2 emissions. While most of the energy goes into primary building use, a significant amount of energy is wasted due to malfunctioning building system equipment and wrongly configured Building Management Systems (BMS). For example, wrongly configured setpoints or building equipment, or misplaced sensors and actuators, can contribute to deviations of the real energy consumption from the predicted one. Our paper is motivated by these posed challenges and aims at pinpointing the types of problems in the BMS components that can affect the energy efficiency of a building, as well as review the methods that can be utilized for their discovery and diagnosis. The goal of the paper is to highlight the challenges that lie in this problem domain, as well as provide a strategy how to counterfeit them.",
"title": ""
}
] |
[
{
"docid": "9872fa151ab96271df460488e8527044",
"text": "The demand of wireless solutions in industrial applications increases since the early nineties. This trend is not only ongoing, it is further pushed by developments in the area of software stacks like the latest Bluetooth Low Energy Stack. It is also pushed by new chip-designs and powerful and highly integrated electronic hardware. The acceptance of wireless technologies as a possible solution for industrial applications, has overcome the entry barrier [1]. The first step to see wireless as standard for many industrial applications is almost accomplished. Nevertheless there is nearly none acceptance of wireless technology for Safety applications. One highly challenging and demanding requirement is still unsolved: The aspect safety and robustness. Those topics have been addressed in many cases but always in a similar manner. WirelessHART as an example addresses this topic with redundant so called multiple propagation paths and frequency hopping to handle with interferences and loss of network participants. So far the pure peer to peer link is rarely investigated and there are less safety solutions available. One product called LoRa™ can be seen as one possible solution to address this lack of safety within wireless links. This paper focuses on the safety performance evaluation of a modem-chip-design. The use of diverse and redundant wireless technologies like LoRa can lead to an increase acceptance of wireless in safety applications. Many measurements in real industrial application have been carried out to be able to benchmark the new chip in terms of the safety aspects. The content of this research results can help to raise the level of confidence in wireless. In this paper, the term “safety” is used for data transmission reliability.",
"title": ""
},
{
"docid": "4a164ec21fb69e7db5c90467c6f6af17",
"text": "Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival.",
"title": ""
},
{
"docid": "86fb01912ab343b95bb31e0b06fff851",
"text": "Serial periodic data exhibit both serial and periodic properties. For example, time continues forward serially, but weeks, months, and years are periods that recur. While there are extensive visualization techniques for exploring serial data, and a few for exploring periodic data, no existing technique simultaneously displays serial and periodic attributes of a data set. We introduce a spiral visualization technique, which displays data along a spiral to highlight serial attributes along the spiral axis and periodic ones along the radii. We show several applications of the spiral visualization to data exploration tasks, present our implementation, discuss the capacity for data analysis, and present findings of our informal study with users in data-rich scientific domains.",
"title": ""
},
{
"docid": "f0c08cb3e23e71bab0ff9ca73a4d7869",
"text": "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state of art supervised methods for single view depth estimation.",
"title": ""
},
{
"docid": "4331746158d056ffdb5a47b56257aa2c",
"text": "This paper presents an analysis of the effect of duty ratio on power loss and efficiency of the Class-E amplifier. Conduction loss for each Class-E circuit component is derived and total amplifier losses and efficiency are expressed as functions of duty ratio. Two identical 300-W Class-E amplifiers operating at 7.29 MHz are designed, constructed, and tested in the laboratory. Dependence of total efficiency upon duty ratio when using real components is derived and verified experimentally. Derived loss and efficiency equations demonstrate rapid drop in efficiency for low duty ratio (below approximately 30%). Experimental results very closely matched calculated power loss and efficiency.",
"title": ""
},
{
"docid": "3853fdb51a5e66c9fe83288c37bdad12",
"text": "We report a case of a young girl with Turner syndrome presenting with a pulsatile left-sided supraclavicular swelling since birth, which proved to be the rare anomaly of a cervical aortic arch. Though elongation of the transverse aortic arch is well known in Turner syndrome, to the best of our knowledge, a cervical aortic arch has not been described in the literature.",
"title": ""
},
{
"docid": "e8d0b295658e582e534b9f41b1f14b25",
"text": "The rapid development of artificial intelligence has brought the artificial intelligence threat theory as well as the problem about how to evaluate the intelligence level of intelligent products. Both need to find a quantitative method to evaluate the intelligence level of intelligence systems, including human intelligence. Based on the standard intelligence system and the extended Von Neumann architecture, this paper proposes General IQ, Service IQ and Value IQ evaluation methods for intelligence systems, depending on different evaluation purposes. Among them, the General IQ of intelligence systems is to answer the question of whether \"the artificial intelligence can surpass the human intelligence\", which is reflected in putting the intelligence systems on an equal status and conducting the unified evaluation. The Service IQ and Value IQ of intelligence systems are used to answer the question of “how the intelligent products can better serve the human”, reflecting the intelligence and required cost of each intelligence system as a product in the process of serving human. 0. Background With AlphaGo defeating the human Go champion Li Shishi in 2016[1], the worldwide artificial intelligence is developing rapidly. As a result, the artificial intelligence threat theory is widely disseminated as well. At the same time, the intelligent products are flourishing and emerging. Can the artificial intelligence surpass the human intelligence? What level exactly does the intelligence of these intelligent products reach? To answer these questions requires a quantitative method to evaluate the development level of intelligence systems. Since the introduction of the Turing test in 1950, scientists have done a great deal of work on the evaluation system for the development of artificial intelligence[2]. In 1950, Turing proposed the famous Turing experiment, which can determine whether a computer has the intelligence equivalent to that of human with questioning and human judgment method. As the most widely used artificial intelligence test method, the Turing test does not test the intelligence development level of artificial intelligence, but only judges whether the intelligence system can be the same with human intelligence, and depends heavily on the judges’ and testees’ subjective judgments due to too much interference from human factors, so some people often claim their ideas have passed the Turing test, even without any strict verification. On March 24, 2015, the Proceedings of the National Academy of Sciences (PNAS) published a paper proposing a new Turing test method called “Visual Turing test”, which was designed to perform a more in-depth evaluation on the image cognitive ability of computer[3]. In 2014, Mark O. Riedl of the Georgia Institute of Technology believed that the essence of intelligence lied in creativity. He designed a test called Lovelace version 2.0. The test range of Lovelace 2.0 includes the creation of a virtual story novel, poetry, painting and music[4]. There are two problems in various solutions including the Turing test in solving the artificial intelligence quantitative test. 
Firstly, these test methods do not form a unified intelligent model, nor do they use such a model as a basis for analysis to distinguish multiple categories of intelligence, which makes it impossible to test different intelligence systems, including humans, uniformly; secondly, these test methods cannot quantitatively analyze artificial intelligence, or can only quantitatively analyze some aspects of intelligence. But what percentage of human intelligence does a given system reach? What is the ratio of its development speed to that of human intelligence? These problems are not covered in the above studies. In response to these problems, the author of this paper proposes that there are three types of IQ for evaluating the intelligence level of intelligence systems, based on different purposes, namely: General IQ, Service IQ and Value IQ. The theoretical basis of the three methods and IQs for the evaluation of intelligence systems, together with detailed definitions and evaluation methods, is elaborated in the following. 1. Theoretical Basis: Standard Intelligence System and Extended Von Neumann Architecture. People face two major challenges in evaluating the intelligence level of an intelligence system, including human beings and artificial intelligence systems. Firstly, artificial intelligence systems do not currently form a unified model; secondly, there is at present no unified model for comparing artificial intelligence systems with humans. In response to this problem, the author's research team referred to the Von Neumann architecture [5], David Wexler's human intelligence model [6], and the DIKW model from the field of knowledge management [7], and put forward a \"standard intelligent model\", which describes the characteristics and attributes of artificial intelligence systems and humans uniformly, and takes an agent as a system with the abilities of knowledge acquisition, mastery, creation and feedback [8] (see Figure 1: Standard Intelligence Model). Based on this model, in combination with the Von Neumann architecture, an extended Von Neumann architecture can be formed (see Figure 2: Expanded Von Neumann Architecture; A. arithmetic logic unit, B. control unit, C. internal memory unit, D. innovation generator, E. input device, F. output device). Compared to the Von Neumann architecture, this model adds an innovation and creation function that can discover new elements of knowledge and new laws based on the existing knowledge, store them for use by the computing and control units, and achieve knowledge interaction with the outside through the input/output system. The second addition is an external knowledge database or cloud storage that enables knowledge sharing, whereas the external storage of the Von Neumann architecture serves only the single system. 2. Definitions of Three IQs of Intelligence Systems. 2.1 Proposal of AI General IQ (AI G IQ). Based on the standard intelligent model, the research team established the AI IQ Test Scale and used it to conduct AI IQ tests on more than 50 artificial intelligence systems, including Google, Siri, Baidu and Bing, and on human groups at the ages of 6, 12 and 18, in 2014 and 2016. The test results show that the performance of artificial intelligence systems such as Google and Baidu has increased greatly compared with two years earlier, but still lags behind the human group at the age of 6 [9] (see Table 1 and Table 2; Table 1 ranks the top 13 artificial intelligence IQs for 2014).",
"title": ""
},
{
"docid": "aa52a5764fc0b95e11d3088f7cdc7448",
"text": "Generative Adversarial Networks (GANs) have received wide attention in the machine learning field for their potential to learn high-dimensional, complex real data distribution. Specifically, they do not rely on any assumptions about the distribution and can generate real-like samples from latent space in a simple manner. This powerful property allows GANs to be applied to various applications such as image synthesis, image attribute editing, image translation, domain adaptation, and other academic fields. In this article, we discuss the details of GANs for those readers who are familiar with, but do not comprehend GANs deeply or who wish to view GANs from various perspectives. In addition, we explain how GANs operates and the fundamental meaning of various objective functions that have been suggested recently. We then focus on how the GAN can be combined with an autoencoder framework. Finally, we enumerate the GAN variants that are applied to various tasks and other fields for those who are interested in exploiting GANs for their research.",
"title": ""
},
{
"docid": "577b9ea82dd60b394ad3024452986d96",
"text": "Financial fraud is an issue with far reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods involving manual detection are not only time consuming, expensive and inaccurate, but in the age of big data they are also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive review of financial fraud detection research using such data mining methods, with a particular focus on computational intelligence (CI)-based techniques. Over fifty scientific literature, primarily spanning the period 2004-2014, were analysed in this study; literature that reported empirical studies focusing specifically on CI-based financial fraud detection were considered in particular. Research gap was identified as none of the existing review articles addresses the association among fraud types, CIbased detection algorithms and their performance, as reported in the literature. We have presented a comprehensive classification as well as analysis of existing fraud detection literature based on key aspects such as detection algorithm used, fraud type investigated, and performance of the detection methods for specific financial fraud types. Some of the key issues and challenges associated with the current practices and potential future direction of research have also",
"title": ""
},
{
"docid": "d1a804b3ecd5ed5cf277ae0c01f85bde",
"text": "Researchers have extensively chronicled the trends and challenges in higher education (Altbach et al. 2009). MOOCs appear to be as much about the collective grasping of universities’ leaders to bring higher education into the digital age as they are about a particular method of teaching. In this chapter, I won’t spend time commenting on the role of MOOCs in educational transformation or even why attention to this mode of delivering education has received unprecedented hype (rarely has higher education as a system responded as rapidly to a trend as it has responded to open online courses). Instead, this chapter details different MOOC models and the underlying pedagogy of each.",
"title": ""
},
{
"docid": "4d604ebc7060c5a5a7dd32d0494886db",
"text": "Database can accommodate a very large number of users on an on-demand basis. The main limitations with conventional relational database management systems (RDBMS) are that they are hard to scale with Data warehousing, Grid, Web 2.0 and Cloud applications, have non-linear query execution time, have unstable query plans and have static schema. Even though RDBMS's have provided database users with the best mix of simplicity, robustness, flexibility, performance, scalability and compatibility but they are not able to satisfy the present day users and applications for the reasons mentioned above. The next generation NonSQL (NoSQL) databases are mostly non-relational, distributed and horizontally scalable and are able to satisfy most of the needs of the present day applications. The main characteristics of these databases are schema-free, no join, non-relational, easy replication support, simple API and eventually consistent. The aim of this paper is to illustrate how a problem being solved using MySQL will perform when MongoDB is used on a Big data dataset. The results are encouraging and clearly showcase the comparisons made. Queries are executed on a big data airlines database using both MongoDB and MySQL. Select, update, delete and insert queries are executed and performance is evaluated.",
"title": ""
},
{
"docid": "b7dec8c2a0ef689ef0cac1eb6ed76cc5",
"text": "One of the most difficult speech recognition tasks is accurate recognition of human to human communication. Advances in deep learning over the last few years have produced major speech recognition improvements on the representative Switchboard conversational corpus. Word error rates that just a few years ago were 14% have dropped to 8.0%, then 6.6% and most recently 5.8%, and are now believed to be within striking range of human performance. This then raises two issues what IS human performance, and how far down can we still drive speech recognition error rates? A recent paper by Microsoft suggests that we have already achieved human performance. In trying to verify this statement, we performed an independent set of human performance measurements on two conversational tasks and found that human performance may be considerably better than what was earlier reported, giving the community a significantly harder goal to achieve. We also report on our own efforts in this area, presenting a set of acoustic and language modeling techniques that lowered the word error rate of our own English conversational telephone LVCSR system to the level of 5.5%/10.3% on the Switchboard/CallHome subsets of the Hub5 2000 evaluation, which at least at the writing of this paper is a new performance milestone (albeit not at what we measure to be human performance!). On the acoustic side, we use a score fusion of three models: one LSTM with multiple feature inputs, a second LSTM trained with speaker-adversarial multitask learning and a third residual net (ResNet) with 25 convolutional layers and time-dilated convolutions. On the language modeling side, we use word and character LSTMs and convolutional WaveNet-style language models.",
"title": ""
},
{
"docid": "a4e92e4dc5d93aec4414bc650436c522",
"text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
},
{
"docid": "1453350c8134ecfe272255b71e7707ad",
"text": "Program slicing is a viable method to restrict the focus of a task to specific sub-components of a program. Examples of applications include debugging, testing, program comprehension, restructuring, downsizing, and parallelization. This paper discusses different statement deletion based slicing methods, together with algorithms and applications to software engineering.",
"title": ""
},
{
"docid": "4c85c55ba02b2823aad33bf78d224b61",
"text": "We developed an affordance-based methodology to support environmentally conscious behavior (ECB) that conserves resources such as materials, energy, etc. While studying concepts that aim to support ECB, we noted that characteristics of products that enable ECB tend to be more accurately described as affordances than functions. Therefore, we became interested in affordances, and specifically how affordances can be used to design products that support ECB. Affordances have been described as possible ways of interacting with products, or context-dependent relations between artifacts and users. Other researchers have explored affordances in lieu of functions as a basis for design, and developed detailed deductive methods of discovering affordances in products. We abstracted desired affordances from patterns and principles we observed to support ECB, and generated concepts based on those affordances. As a possible shortcut to identifying and implementing relevant affordances, we introduced the affordance-transfer method. This method involves altering a product’s affordances to add desired features from related products. Promising sources of affordances include lead-user and other products that support resource conservation. We performed initial validation of the affordance-transfer method and observed that it can improve the usefulness of the concepts that novice designers generate to support ECB. [DOI: 10.1115/1.4025288]",
"title": ""
},
{
"docid": "9fdc90bb52bd0895b342351004344721",
"text": "We present an ensemble approach to cross-domain authorship attribution that combines predictions made by three independent classifiers, namely, standard char n-grams, char n-grams with non-diacritic distortion and word ngrams. Our proposal relies on variable-length n-gram models and multinomial logistic regression, and selects the prediction of highest probability among the three models as the output for the task. Results generally outperform the PANCLEF 2018 baseline system that makes use of fixed-length char n-grams and linear SVM classification.",
"title": ""
},
{
"docid": "974800093c29c5484abd6644ae330555",
"text": "In this paper, we investigate the gender gap in education in rural northwest China. We first discuss parental perceptions of abilities and appropriate roles for girls and boys; parental concerns about old-age support; and parental perceptions of different labor market outcomes for girls' and boys' education. We then investigate gender disparities in investments in children, children's performance at school, and children's subsequent attainment. We analyze a survey of 9-12-year-old children and their families conducted in rural Gansu Province in the year 2000, along with follow-up information about subsequent educational attainment collected 7 years later. We complement our main analysis with two illustrative case studies of rural families drawn from 11 months of fieldwork conducted in rural Gansu between 2003 and 2005 by the second author.In 2000, most mothers expressed egalitarian views about girls' and boys' rights and abilities, in the abstract. However, the vast majority of mothers still expected to rely on sons for old-age support, and nearly one in five mothers interviewed agreed with the traditional saying, \"Sending girls to school is useless since they will get married and leave home.\" Compared to boys, girls faced somewhat lower (though still very high) maternal educational expectations and a greater likelihood of being called on for household chores than boys. However, there was little evidence of a gender gap in economic investments in education. Girls rivaled or outperformed boys in academic performance and engagement. Seven years later, boys had attained just about a third of a year more schooling than girls-a quite modest advantage that could not be fully explained by early parental attitudes and investments, or student performance or engagement. Fieldwork confirmed that parents of sons and daughters tended to have high aspirations for their children. Parents sometimes viewed boys as having greater aptitude, but tended to view girls as having more dedication-an attribute parents perceived as being critical for educational success. Findings suggest that at least in Gansu, rural parental educational attitudes and practices toward boys and girls are more complicated and less uniformly negative for girls than commonly portrayed.",
"title": ""
},
{
"docid": "4768117021fc0e3c1f4b7730b11f9e73",
"text": "Answer Set Programming (ASP) has become an established paradigm for Knowledge Representation and Reasoning, in particular, when it comes to solving knowledge-intense combinatorial (optimization) problems. ASP’s unique pairing of a simple yet rich modeling language with highly performant solving technology has led to an increasing interest in ASP in academia as well as industry. To further boost this development and make ASP fit for real world applications it is indispensable to equip it with means for an easy integration into software environments and for adding complementary forms of reasoning. In this tutorial, we describe how both issues are addressed in the ASP system clingo. At first, we outline features of clingo’s application programming interface (API) that are essential for multi-shot ASP solving, a technique for dealing with continuously changing logic programs. This is illustrated by realizing two exemplary reasoning modes, namely branch-and-bound-based optimization and incremental ASP solving. We then switch to the design of the API for integrating complementary forms of reasoning and detail this in an extensive case study dealing with the integration of difference constraints. We show how the syntax of these constraints is added to the modeling language and seamlessly merged into the grounding process. We then develop in detail a corresponding theory propagator for difference constraints and present how it is integrated into clingo’s solving process.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
}
] |
scidocsrr
|
a5977f8c23601aa52ca00b537d703b14
|
The Impact of Electronic Word of Mouth on Consumers' Purchasing Decisions
|
[
{
"docid": "1993b540ff91922d381128e9c8592163",
"text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-ofmouth have been discussed.",
"title": ""
},
{
"docid": "ddad5569efe76dca3445e7e4d4aceafc",
"text": "This study evaluates the current status of electronic word-of-mouth (eWOM) research through an exhaustive literature review of relevant articles. We have identified a total of 83 eWOM research articles published from 2001 through 2010. Published research into eWOM first appeared in peerreviewed journals about ten years ago, and research has been steadily increasing. Among research topic area, the impact of eWOM communication was the most researched topic in the last decade. We also found that individual and message were the two mostly used unit of analysis in eWOM studies. Survey, secondary data analysis, and mathematical modeling were the three main streams of research method. Finally, we found diverse theoretical approaches in understanding eWOM communication. We conclude this paper by identifying important trends in the eWOM literature to provide future research directions.",
"title": ""
}
] |
[
{
"docid": "0c2e489edeac2c8ad5703eda644edfac",
"text": "Nowadays, more and more decision procedures are supported or even guided by automated processes. An important technique in this automation is data mining. In this chapter we study how such automatically generated decision support models may exhibit discriminatory behavior towards certain groups based upon, e.g., gender or ethnicity. Surprisingly, such behavior may even be observed when sensitive information is removed or suppressed and the whole procedure is guided by neutral arguments such as predictive accuracy only. The reason for this phenomenon is that most data mining methods are based upon assumptions that are not always satisfied in reality, namely, that the data is correct and represents the population well. In this chapter we discuss the implicit modeling assumptions made by most data mining algorithms and show situations in which they are not satisfied. Then we outline three realistic scenarios in which an unbiased process can lead to discriminatory models. The effects of the implicit assumptions not being fulfilled are illustrated by examples. The chapter concludes with an outline of the main challenges and problems to be solved.",
"title": ""
},
{
"docid": "0c1381eb866a42da820a2b18442938e7",
"text": "We present a new method that learns to segment and cluster images without labels of any kind. A simple loss based on information theory is used to extract meaningful representations directly from raw images. This is achieved by maximising mutual information of images known to be related by spatial proximity or randomized transformations, which distills their shared abstract content. Unlike much of the work in unsupervised deep learning, our learned function outputs segmentation heatmaps and discrete classifications labels directly, rather than embeddings that need further processing to be usable. The loss can be formulated as a convolution, making it the first end-to-end unsupervised learning method that learns densely and efficiently (i.e. without sampling) for semantic segmentation. Implemented using realistic settings on generic deep neural network architectures, our method attains superior performance on COCO-Stuff and ISPRS-Potsdam for segmentation and STL for clustering, beating state-of-the-art baselines.",
"title": ""
},
{
"docid": "d2a89459ca4a0e003956d6fe4871bb34",
"text": "In this paper, a high-efficiency high power density LLC resonant converter with a matrix transformer is proposed. A matrix transformer can help reduce leakage inductance and the ac resistance of windings so that the flux cancellation method can then be utilized to reduce core size and loss. Synchronous rectifier (SR) devices and output capacitors are integrated into the secondary windings to eliminate termination-related winding losses, via loss and reduce leakage inductance. A 1 MHz 390 V/12 V 1 kW LLC resonant converter prototype is built to verify the proposed structure. The efficiency can reach as high as 95.4%, and the power density of the power stage is around 830 W/in3.",
"title": ""
},
{
"docid": "6c9acb831bc8dc82198aef10761506be",
"text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.",
"title": ""
},
{
"docid": "86ce47260d84ddcf8558a0e5e4f2d76f",
"text": "We present the definition and computational algorithms for a new class of surfaces which are dual to the isosurface produced by the widely used marching cubes (MC) algorithm. These new isosurfaces have the same separating properties as the MC surfaces but they are comprised of quad patches that tend to eliminate the common negative aspect of poorly shaped triangles of the MC isosurfaces. Based upon the concept of this new dual operator, we describe a simple, but rather effective iterative scheme for producing smooth separating surfaces for binary, enumerated volumes which are often produced by segmentation algorithms. Both the dual surface algorithm and the iterative smoothing scheme are easily implemented.",
"title": ""
},
{
"docid": "b87f7587821f4a8718396a1dd7fa479e",
"text": "In the future, robots will be important device widely in our daily lives to achieve complicated tasks. To achieve the tasks, there are some demands for the robots. In this paper, two strong demands of them are taken attention. First one is multiple-degrees of freedom (DOF), and the second one is miniaturization of the robots. Although rotary actuators is necessary to get multiple-DOF, miniaturization is difficult with rotary motors which are usually utilized for multiple-DOF robots. Here, tendon-driven rotary actuator is a candidate to solve the problems of the rotary actuators. The authors proposed a type of tendon-driven rotary actuator using thrust wires. However, big mechanical loss and frictional loss occurred because of the complicated structure of connection points. As the solution for the problems, this paper proposes a tendon-driven rotary actuator for haptics with thrust wires and polyethylene (PE) line. In the proposed rotary actuator, a PE line is used in order to connect the tip points of thrust wires and the end effector. The validity of the proposed rotary actuator is evaluated by experiments.",
"title": ""
},
{
"docid": "06e74a431b45aec75fb21066065e1353",
"text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.",
"title": ""
},
{
"docid": "547ce0778d8d51d96a610fb72b6bb4e9",
"text": "Applications in cyber-physical systems are increasingly coupled with online instruments to perform long-running, continuous data processing. Such “always on” dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. F`oε is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of F`oε by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads.",
"title": ""
},
{
"docid": "e396d45e439e48a62773632487235ac4",
"text": "Many tools from the field of graph signal processing exploit knowledge of the underlying graph's structure (e.g., as encoded in the Laplacian matrix) to process signals on the graph. Therefore, in the case when no graph is available, graph signal processing tools cannot be used anymore. Researchers have proposed approaches to infer a graph topology from observations of signals on its vertices. Since the problem is ill-posed, these approaches make assumptions, such as smoothness of the signals on the graph, or sparsity priors. In this paper, we propose a characterization of the space of valid graphs, in the sense that they can explain stationary signals. To simplify the exposition in this paper, we focus here on the case where signals were i.i.d. at some point back in time and were observed after diffusion on a graph. We show that the set of graphs verifying this assumption has a strong connection with the eigenvectors of the covariance matrix, and forms a convex set. Along with a theoretical study in which these eigenvectors are assumed to be known, we consider the practical case when the observations are noisy, and experimentally observe how fast the set of valid graphs converges to the set obtained when the exact eigenvectors are known, as the number of observations grows. To illustrate how this characterization can be used for graph recovery, we present two methods for selecting a particular point in this set under chosen criteria, namely graph simplicity and sparsity. Additionally, we introduce a measure to evaluate how much a graph is adapted to signals under a stationarity assumption. Finally, we evaluate how state-of-the-art methods relate to this framework through experiments on a dataset of temperatures.",
"title": ""
},
{
"docid": "c35cabb80618f8cfee04c97238cceb31",
"text": "Addiction is a chronic relapsing disorder, in that most addicted individuals who choose to quit taking drugs fail to maintain abstinence in the long-term. Relapse is especially likely when recovering addicts encounter risk factors like small \"priming\" doses of drug, stress, or drug-associated cues and locations. In rodents, these same factors reinstate cocaine seeking after a period of abstinence, and extensive preclinical work has used priming, stress, or cue reinstatement models to uncover brain circuits underlying cocaine reinstatement. Here, we review common rat models of cocaine relapse, and discuss how specific features of each model influence the neural circuits recruited during reinstated drug seeking. To illustrate this point, we highlight the surprisingly specific roles played by ventral pallidum subcircuits in cocaine seeking reinstated by either cocaine-associated cues, or cocaine itself. One goal of such studies is to identify, and eventually to reverse the specific circuit activity that underlies the inability of some humans to control their drug use. Based on preclinical findings, we posit that circuit activity in humans also differs based on the triggers that precipitate craving and relapse, and that associated neural responses could help predict the triggers most likely to elicit relapse in a given person. If so, examining circuit activity could facilitate diagnosis of subgroups of addicted people, allowing individualized treatment based on the most problematic risk factors.",
"title": ""
},
{
"docid": "3b0ee097a17ed82306a0b2cc3c1d70d1",
"text": "This RFC is an official specification for the Internet community. It incorporates by reference, amends, corrects, and supplements the primary protocol standards documents relating to hosts. Distribution of this document is unlimited. Summary This is one RFC of a pair that defines and discusses the requirements for Internet host software. This RFC covers the communications protocol layers: link layer, IP layer, and transport layer; its companion RFC-1123 covers the application and support protocols.",
"title": ""
},
{
"docid": "5679a329a132125d697369ca4d39b93e",
"text": "This paper proposes a method to explore the design space of FinFETs with double fin heights. Our study shows that if one fin height is sufficiently larger than the other and the greatest common divisor of their equivalent transistor widths is small, the fin height pair will incur less width quantization effect and lead to better area efficiency. We design a standard cell library based on this technology using a tailored FreePDK15. With respect to a standard cell library designed with FreePDK15, about 86% of the cells designed with FinFETs of double fin heights have a smaller delay and 54% of the cells take a smaller area. We also demonstrate the advantages of FinFETs with double fin heights through chip designs using our cell library.",
"title": ""
},
{
"docid": "81a45cb4ca02c38839a81ad567eb1491",
"text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.",
"title": ""
},
{
"docid": "b2962d473a4b2d1a20996ae578ceccd4",
"text": "In this paper, we examine the logic and methodology of engineering design from the perspective of the philosophy of science. The fundamental characteristics of design problems and design processes are discussed and analyzed. These characteristics establish the framework within which different design paradigms are examined. Following the discussions on descriptive properties of design, and the prescriptive role of design paradigms, we advocate the plausible hypothesis that there is a direct resemblance between the structure of design processes and the problem solving of scientific communities. The scientific community metaphor has been useful in guiding the development of general purpose highly effective design process meta-tools [73], [125].",
"title": ""
},
{
"docid": "5106155fbe257c635fb9621240fd7736",
"text": "AIM\nThe aim of this study was to investigate the prevalence of pain and pain assessment among inpatients in a university hospital.\n\n\nBACKGROUND\nPain management could be considered an indicator of quality of care. Few studies report on prevalence measures including all inpatients.\n\n\nDESIGN\nQuantitative and explorative.\n\n\nMETHOD\nSurvey.\n\n\nRESULTS\nOf the inpatients at the hospital who answered the survey, 494 (65%) reported having experienced pain during the preceding 24 hours. Of the patients who reported having experienced pain during the preceding 24 hours, 81% rated their pain >3 and 42.1% rated their pain >7. Of the patients who reported having experienced pain during the preceding 24 hours, 38.7% had been asked to self-assess their pain using a Numeric Rating Scale (NRS); 29.6% of the patients were completely satisfied, and 11.5% were not at all satisfied with their participation in pain management.\n\n\nCONCLUSIONS\nThe result showed that too many patients are still suffering from pain and that the NRS is not used to the extent it should be. Efforts to overcome under-implementation of pain assessment are required, particularly on wards where pain is not obvious, e.g., wards that do not deal with surgery patients. Work to improve pain management must be carried out through collaboration across professional groups.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nUsing a pain assessment tool such as the NRS could help patients express their pain and improve communication between nurses and patients in relation to pain as well as allow patients to participate in their own care. Carrying out prevalence pain measures similar to those used here could be helpful in performing quality improvement work in the area of pain management.",
"title": ""
},
{
"docid": "00dbe58bcb7d4415c01a07255ab7f365",
"text": "The paper deals with a time varying vehicle-to-vehicle channel measurement in the 60 GHz millimeter wave (MMW) band using a unique time-domain channel sounder built from off-the-shelf components and standard measurement devices and employing Golay complementary sequences as the excitation signal. The aim of this work is to describe the sounder architecture, primary data processing technique, achievable system parameters, and preliminary measurement results. We measured the signal propagation between two passing vehicles and characterized the signal reflected by a car driving on a highway. The proper operation of the channel sounder is verified by a reference measurement performed with an MMW vector network analyzer in a rugged stationary office environment. The goal of the paper is to show the measurement capability of the sounder and its superior features like 8 GHz measuring bandwidth enabling high time resolution or good dynamic range allowing an analysis of weak multipath components.",
"title": ""
},
{
"docid": "0dc59d18787d0dfd2be19dfcdd812d7d",
"text": "In 1974 Bahl, Cocke, Jelinek and Raviv published the decoding algorithm based on a posteriori probabilities later on known as the BCJR, Maximum a Posteriori (MAP) or forward-backward algorithm. The procedure can be applied to block or convolutional codes but, as it is more complex than the Viterbi algorithm, during about 20 years it was not used in practical implementations. The situation was dramatically changed with the advent of turbo codes in 1993. Their inventors, Berrou, Glavieux and Thithimajshima, used a modified version of the BCJR algorithm, which has reborn vigorously that way.",
"title": ""
},
{
"docid": "5177e0169470b90e3b58ffc8d87cffb7",
"text": "Melatonin, an endogenous signal of darkness, is an important component of the body's internal time-keeping system. As such it regulates major physiological processes including the sleep wake cycle, pubertal development and seasonal adaptation. In addition to its relevant antioxidant activity, melatonin exerts many of its physiological actions by interacting with membrane MT1 and MT2 receptors and intracellular proteins such as quinone reductase 2, calmodulin, calreticulin and tubulin. Here we review the current knowledge about the properties and signaling of melatonin receptors as well as their potential role in health and some diseases. Melatonin MT1 and MT2 receptors are G protein coupled receptors which are expressed in various parts of the CNS (suprachiasmatic nuclei, hippocampus, cerebellar cortex, prefrontal cortex, basal ganglia, substantia nigra, ventral tegmental area, nucleus accumbens and retinal horizontal, amacrine and ganglion cells) and in peripheral organs (blood vessels, mammary gland, gastrointestinal tract, liver, kidney and bladder, ovary, testis, prostate, skin and the immune system). Melatonin receptors mediate a plethora of intracellular effects depending on the cellular milieu. These effects comprise changes in intracellular cyclic nucleotides (cAMP, cGMP) and calcium levels, activation of certain protein kinase C subtypes, intracellular localization of steroid hormone receptors and regulation of G protein signaling proteins. There are circadian variations in melatonin receptors and responses. Alterations in melatonin receptor expression as well as changes in endogenous melatonin production have been shown in circadian rhythm sleep disorders, Alzheimer's and Parkinson's diseases, glaucoma, depressive disorder, breast and prostate cancer, hepatoma and melanoma. This paper reviews the evidence concerning melatonin receptors and signal transduction pathways in various organs. It further considers their relevance to circadian physiology and pathogenesis of certain human diseases, with a focus on the brain, the cardiovascular and immune systems, and cancer.",
"title": ""
},
{
"docid": "57889499aaa45b38754d9d6cebff96b8",
"text": "ion speeds up the DTW algorithm by operating on a reduced representation of the data. These algorithms include IDDTW [3], PDTW [13], and COW [2] . The left side of Figure 5 shows a full-resolution cost matrix for which a minimum-distance warp path must be found. Rather than running DTW on the full resolution (1/1) cost matrix, the time series are reduced in size to make the number of cells in the cost matrix more manageable. A warp path is found for the lowerresolution time series and is mapped back to full resolution. The resulting speedup depends on how much abstraction is used. Obviously, the calculated warp path becomes increasingly inaccurate as the level of abstraction increases. Projecting the low resolution warp path to the full resolution usually creates a warp path that is far from optimal. This is because even IF an optimal warp path passes through the low-resolution cell, projecting the warp path to the higher resolution ignores local variations in the warp path that can be very significant. Indexing [9][14] uses lower-bounding functions to prune the number of times DTW is run for similarity search [17]. Indexing speeds up applications in which DTW is used, but it does not make up the actual DTW calculation any more efficient. Our FastDTW algorithm uses ideas from both the constraints and abstraction approaches. Using a combination of both overcomes many limitations of using either method individually, and yields an accurate algorithm that is O(N) in both time and space complexity. Our multi-level approach is superficially similar to IDDTW [3] because they both evaluate several different resolutions. However, IDDTW simply executes PDTW [13] at increasingly higher resolutions until a desired “accuracy” is achieved. IDDTW does not project low resolution solutions to higher resolutions. In Section 4, we will demonstrate that these methods are more inaccurate than our method given the same amount of execution time. Projecting warp paths to higher resolutions is also done in the construction of “Match Webs” [15]. However, their approach is still O(N) due to the simultaneous search for many warp paths (they call them “chains”). A multi-resolution approach in their application also could not continue down to the low resolutions without severely reducing the number of “chains” that could be found. Some recent research [18] asserts that there is no need to speed up the original DTW algorithm. However, this is only true under the following (common) conditions: 1) Tight Constraints A relatively strict near-linear warp path is allowable. 2) Short Time Series All time series are short enough for the DTW algorithm to execute quickly. (~3,000 points if a warp path is needed, or ~100,000 if no warp path is needed and the user has a lot of patience).",
"title": ""
},
{
"docid": "d3ce627360a466ac95de3a61d64995e1",
"text": "The large size of power systems makes behavioral analysis of electricity markets computationally taxing. Reducing the system into a smaller equivalent, based on congestion zones, can substantially reduce the computational requirements. In this paper, we propose a scheme to determine the equivalent reactance of interfaces of a reduced system based upon the zonal power transfer distribution factors of the original system. The dc power flow model is used to formulate the problem. Test examples are provided using both an illustrative six-bus system and a more realistically sized 12 925-bus system.",
"title": ""
}
] |
scidocsrr
|
c0bcd123fdbd28899ff3a66d63acde8c
|
Empathic Robots for Long-term Interaction - Evaluating Social Presence, Engagement and Perceived Support in Children
|
[
{
"docid": "8ace8a84496060999001bc8daab1b01f",
"text": "As the field of HRI evolves, it is important to understand how users interact with robots over long periods. This paper reviews the current research on long-term interaction between users and social robots. We describe the main features of these robots and highlight the main findings of the existing long-term studies. We also present a set of directions for future research and discuss some open issues that should be addressed in this field.",
"title": ""
}
] |
[
{
"docid": "757c4f0d74b2070e184124b207136852",
"text": "Why do employees engage in innovative behavior at their workplaces? We examine how employees’ innovative behavior is explained by expectations for such behavior to affect job performance (expected positive performance outcomes) and image inside their organizations (expected image risks and expected image gains). We found significant effects of all three outcome expectations on innovative behavior. These outcome expectations, as intermediate psychological processes, were shaped by contextual and individual difference factors, including perceived organization support for innovation, supervisor relationship quality, job requirement for innovativeness, employee reputation as innovative, and individual dissatisfaction with the status quo.",
"title": ""
},
{
"docid": "73bf9a956ea7a10648851c85ef740db0",
"text": "Printed atmospheric spark gaps as ESD-protection on PCBs are examined. At first an introduction to the physic behind spark gaps. Afterward the time lag (response time) vs. voltage is measured with high load impedance. The dependable clamp voltage (will be defined later) is measured as a function of the load impedance and the local field in the air gap is simulated with FIT simulation software. At last the observed results are discussed on the basic of the physic and the simulations.",
"title": ""
},
{
"docid": "bda0ae59319660987e9d2686d98e4b9a",
"text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.",
"title": ""
},
{
"docid": "47b39a8839d536d57c692781d61f2b5e",
"text": "Recently, stream data mining applications has drawn vital attention from several research communities. Stream data is continuous form of data which is distinguished by its online nature. Traditionally, machine learning area has been developing learning algorithms that have certain assumptions on underlying distribution of data such as data should have predetermined distribution. Such constraints on the problem domain lead the way for development of smart learning algorithms performance is theoretically verifiable. Real-word situations are different than this restricted model. Applications usually suffers from problems such as unbalanced data distribution. Additionally, data picked from non-stationary environments are also usual in real world applications, resulting in the “concept drift” which is related with data stream examples. These issues have been separately addressed by the researchers, also, it is observed that joint problem of class imbalance and concept drift has got relatively little research. If the final objective of clever machine learning techniques is to be able to address a broad spectrum of real world applications, then the necessity for a universal framework for learning from and tailoring (adapting) to, environment where drift in concepts may occur and unbalanced data distribution is present can be hardly exaggerated. In this paper, we first present an overview of issues that are observed in stream data mining scenarios, followed by a complete review of recent research in dealing with each of the issue.",
"title": ""
},
{
"docid": "67269d2f4cc4b4ac07c855e3dfaca4ca",
"text": "Electronic textiles, or e-textiles, are an increasingly important part of wearable computing, helping to make pervasive devices truly wearable. These soft, fabric-based computers can function as lovely embodiments of Mark Weiser's vision of ubiquitous computing: providing useful functionality while disappearing discreetly into the fabric of our clothing. E-textiles also give new, expressive materials to fashion designers, textile designers, and artists, and garments stemming from these disciplines usually employ technology in visible and dramatic style. Integrating computer science, electrical engineering, textile design, and fashion design, e-textiles cross unusual boundaries, appeal to a broad spectrum of people, and provide novel opportunities for creative experimentation both in engineering and design. Moreover, e-textiles are cutting- edge technologies that capture people's imagination in unusual ways. (What other emerging pervasive technology has Vogue magazine featured?) Our work aims to capitalize on these unique features by providing a toolkit that empowers novices to design, engineer, and build their own e-textiles.",
"title": ""
},
{
"docid": "af382dbadec10d8480b09e51503071d2",
"text": "Scent communication plays a central role in the mating behavior of many nonhuman mammals but has often been overlooked in the study of human mating. However, a growing body of evidence suggests that men may perceive women's high-fertility body scents (collected near ovulation) as more attractive than their low-fertility body scents. The present study provides a methodologically rigorous replication of this finding, while also examining several novel questions. Women collected samples of their natural body scent twice--once on a low-fertility day and once on a high-fertility day of the ovulatory cycle. Tests of luteinizing hormone confirmed that women experienced ovulation within two days of their high-fertility session. Men smelled each woman's high- and low-fertility scent samples and completed discrimination and preference tasks. At above-chance levels, men accurately discriminated between women's high- and low-fertility scent samples (61%) and chose women's high-fertility scent samples as more attractive than their low-fertility scent samples (56%). Men also rated each scent sample on sexiness, pleasantness, and intensity, and estimated the physical attractiveness of the woman who had provided the sample. Multilevel modeling revealed that, when high- and low-fertility scent samples were easier to discriminate from each other, high-fertility scent samples received even more favorable ratings compared with low-fertility scent samples. This study builds on a growing body of evidence indicating that men are attracted to cues of impending ovulation in women and raises the intriguing question of whether women's cycling hormones influence men's attraction and sexual approach behavior.",
"title": ""
},
{
"docid": "ce0ba4696c26732ac72b346f72af7456",
"text": "OBJECTIVE\nThe purpose of this study was to examine the relationship between two forms of helping behavior among older adults--informal caregiving and formal volunteer activity.\n\n\nMETHODS\nTo evaluate our hypotheses, we employed Tobit regression models to analyze panel data from the first two waves of the Americans' Changing Lives survey.\n\n\nRESULTS\nWe found that older adult caregivers were more likely to be volunteers than noncaregivers. Caregivers who provided a relatively high number of caregiving hours annually reported a greater number of volunteer hours than did noncaregivers. Caregivers who provided care to nonrelatives were more likely than noncaregivers to be a volunteer and to volunteer more hours. Finally, caregivers were more likely than noncaregivers to be asked to volunteer.\n\n\nDISCUSSION\nOur results provide support for the hypothesis that caregivers are embedded in networks that provide them with more opportunities for volunteering. Additional research on the motivations for volunteering and greater attention to the context and hierarchy of caregiving and volunteering are needed.",
"title": ""
},
{
"docid": "42f3032626b2a002a855476a718a2b1b",
"text": "Learning controllers for bipedal robots is a challenging problem, often requiring expert knowledge and extensive tuning of parameters that vary in different situations. Recently, deep reinforcement learning has shown promise at automatically learning controllers for complex systems in simulation. This has been followed by a push towards learning controllers that can be transferred between simulation and hardware, primarily with the use of domain randomization. However, domain randomization can make the problem of finding stable controllers even more challenging, especially for underactuated bipedal robots. In this work, we explore whether policies learned in simulation can be transferred to hardware with the use of high-fidelity simulators and structured controllers. We learn a neural network policy which is a part of a more structured controller. While the neural network is learned in simulation, the rest of the controller stays fixed, and can be tuned by the expert as needed. We show that using this approach can greatly speed up the rate of learning in simulation, as well as enable transfer of policies between simulation and hardware. We present our results on an ATRIAS robot and explore the effect of action spaces and cost functions on the rate of transfer between simulation and hardware. Our results show that structured policies can indeed be learned in simulation and implemented on hardware successfully. This has several advantages, as the structure preserves the intuitive nature of the policy, and the neural network improves the performance of the hand-designed policy. In this way, we propose a way of using neural networks to improve expert designed controllers, while maintaining ease of understanding.",
"title": ""
},
{
"docid": "40cc765b3bc6c97fe6caaeedce109049",
"text": "There is a need in the humanities for a 3D WebGIS with analytical tools that allow researchers to analyze 3D models linked to spatially referenced data. Geographic Information Systems (GIS) allow for complex spatial analysis of 2.5D data. For example, they offer bird’s eye views of landscapes with extruded building footprints, but one cannot ‘get on the ground’ and interact with true 3D models from a pedestrian perspective. Meanwhile, 3D models and virtual environments visualize data in 3D space, but analytical tools are simple rotation or lighting effects. The MayaArch3D Project is developing a 3D WebGIS—called QueryArch3D—to allow these two distinct approaches to ‘talk to each other’ for studies of architecture and landscapes—in this case, the eighth-century Maya kingdom of Copan, Honduras. With this tool, researchers can search and query, in real time via a virtual reality (VR) environment, segmented 3D models of multiple resolutions (as well as computer-assisted design and reality-based) that are linked to attribute data stored in a spatial database. Beta tests indicate that this tool can assist researchers in expanding questions and developing new analytical methods in humanities research. This article summarizes the results of a pilot project that started in 2009, with an art historian and an archaeologist’s collaborative research on the ancient Maya kingdom and UNESCO World Heritage site of Copan in Honduras—called MayaArch3D. The project researches inno736 digitalcommons.unl.edu T h e M a y a a r c h 3D p r o j e c T : 3D W e b GIS f o r a n c I e n T a r c h I T e c T u r e a n D l a n D S c a p e S 737 1 The Gap between GIS and 3D Modeling Systems 1.1 3D modeling Modern sensor and computing technologies are changing the practice of art history and archaeology because they offer innovative ways to document, reconstruct, and research the ancient world in 3D (El-Hakim et al., 2008; Reindel and Wagner, 2009). State-of-the-art imaging technologies allow researchers to document 3D objects to the level of the micron (e.g. Grün, 2008), whereas Virtual Reality (VR) simulation programs enable reconstructions of ancient buildings in their ancient environments and landscapes. However, as Frischer has noted (2008), the perception is that 3D models are purely illustrative—ideal for education or conservation— whereas how 3D models can assist with comparative research on architecture is an ongoing question. Since 1998, Jennifer von Schwerin has addressed this question for ancient Maya architecture when she began collaborating with Harvard University archaeologists to analyze the collapsed façade sculpture of an eighth-century temple at Copan, Honduras, called Temple 22 (Ahlfeldt 2004; Fash 2011b; von Schwerin 2011a). As an art historian, von Schwerin seeks to correlate political and social changes in ancient Maya kingdoms with developments in architectural form over space and time. But the first challenge is simply to bring together data on the temple that is spread around the world in various archives and museums and to determine how the building once appeared in the past. To test her reconstructions, von Schwerin turned to digital 3D tools. Different methods are possible for creating 3D models of ancient monuments—such as computer graphics, procedural modeling (models created from sets of rules), and reality-based modeling (models created from real-world data such as laser scanning)—and increasingly, these are being combined to create multi-resolution 3D reconstructions. 
Although this combination can expand research possibilities, it is critical to identify optional modeling techniques based on researcher needs and to define the workflow for dealing with multi-resolution models in a 3D WebGIS tool. The MayaArch3D project is addressing this by creating test data of multi-resolution 3D models from Copan, including various 3D simulations of Temple 22 (Remondino et al., 2009, von Schwerin et al., 2011b). The 3D models are being generated at different levels of detail (LoD) and resolutions ranging from individual buildings to archaeological complexes using methodologies based on image data acquired with passive sensors (e.g. digital cameras), range data acquired with active sensors (e.g. laser scanning), classical surveying, and procedural modeling using existing maps. The choice depends on the required accuracy, object dimensions and location, the surface characteristics, the team’s level of experience, the project’s budget, and the final goal. For example, computer-assisted design (CAD) models such as the 3D Studio Max model of Temple vative approaches to integrate GIS, 3D digital models, and VR environments online for teaching and research on ancient architecture and landscapes. It has grown into an international, interdisciplinary project that brings together art historians, archaeologists, and cultural resource managers with experts in remote sensing, photogrammetry, 3D modeling, and VR. The Start Up Phase was funded by two National Endowment for the Humanities, Digital Humanities Start-Up grants to the University of New Mexico (PI: Jennifer von Schwerin) and developed and beta tested a pipeline and prototype 3D WebGIS—called QueryArch3D. The prototype version is available at http://Mayaarch3d.org/project-history/). Project results indicate that it is possible to bridge the gap between 3D and GIS to create a resource for researchers of Maya architecture to compare and analyze 3D models and archaeological data in the context of a geographically referenced, VR landscape. 738 v o n S c h W e r I n e T a l . I n L i t e r a r y a n d L i n g u i s t i c c o m p u t i n g 28 (2013) 22 depicted in Figures 1 and 2 offers the ability to test hypothetical reconstructions and to analyze a building from multiple perspectives (e.g. bird’s eye, exterior versus interior view) with rotation or lighting effects (Figure 3).1 Reality-based models created using active and passive sensors allow for comparison against CAD reconstructions (Figs 4 and 5). VR such as this low-resolution SketchUp model of Copan’s landscape (Figure 6)—created using georeferenced building footprints—provides an urban context for high-resolution 3D models of individual structures and allows users to virtually navigate through ancient cities and landscapes and to increase their awareness of mass, space, and spatial relationships. This interaction facilitates a sense of embodiment and place (Forte and Bonini, 2010), and it also is useful for visualizing the results of archaeological research—for example, an affiliated project is working to display the results of archaeoastronomical studies at Copan (see Figure 10). These are just a few reasons that counter the common perception that 3D models are purely illustrative (e.g. Frischer and Dakouri-Hild, 2008). Increasingly, projects are demonstrating the value of 3D models for scientific analysis. 
Researchers developing tools for viewing and analyzing sophisticated 3D architectural models include the two big VR environment re-creation laboratories—the Experimental Technology Center at University of California, Los Angeles and the Institute for Advanced Technology in the Humanities at the University of Virginia, who have collaborated on the project ‘Rome Reborn’ (romereborn.frischerconsulting.com). In Europe, 3D models of architecture are used to analyze building plans and phases [for instance, the projects on Roman emperor palaces in Rome and Serbia (Weferling et al., 2001) and analyses of the Cologne cathedral (Schock-Werner et al., 2011)]. More recently, a few researchers have begun to explore how digital models might be used for comparative online research. One example is Stephen Murray’s “Mapping Gothic France” project—a collaborative project linking text, Quick Time VR, and 2D and 3D images to an interactive map of Gothic cathedrals. One promising opportunity—the approach taken by the MayaArch3D Project—is to use 3D models as visualization “containers” for different kinds of information (Manferdini et al., 2008). These recent advantages have initiated a broader interest in 3D modeling for archaeology and cultural heritage, which is evident at conferences such as CAA Figure 1. The 3D low-resolution CAD model of Temple 22 used for testing hypothetical reconstructions integrated with high-resolution reality-based 3D models of architectural sculpture (3D model created by F. Galezzi) Figure 2. Preliminary high-definition model of Temple 22 used to test the process of integrating various data sources into the reconstruction process. (3D model by R. Maqueda and J. von Schwerin) T h e M a y a a r c h 3D p r o j e c T : 3D W e b GIS f o r a n c I e n T a r c h I T e c T u r e a n D l a n D S c a p e S 739 (Computer Applications and Quantitative Methods in Archaeology), CIPA (International Committee for Documentation of Cultural Heritage), and the recently founded peer-reviewed journal Digital Applications in Archaeology and Cultural Heritage. 1.2 3D models in ancient American archaeology Most applications of 3D archaeology focus on archaeological sites in Europe or the Middle East; however, the acquisition of reality-based data for 3D models also is increasing for the archaeology of the ancient Americas (e.g. Reindel and Wagner, 2009; Lambers et al., 2007). As for current 3D projects that deal with the remains of the ancient Maya specifically, some are engaged with high-resolution scanning of individual sculptures for conservation and analysis and are considering ways to offer them online. These include Harvard University’s Corpus Project (Tokovinine and Fash, 2008; Fash 2011a, 2012), the MayaArch3D Project summarized here (see also Remondino et al., 2009), and the Mesoamerican Three-Dimensional Imaging Database (Doering and Collins, 2009) (http://www.famsi.org). Other web-based applications, like CyArk, use Google Earth and make point clouds available of whole Maya structures (http://archive.cyark.org). Meanwhile, so",
"title": ""
},
{
"docid": "ff92de8ff0ff78c6ba451d4ce92a189d",
"text": "Recognition of the mode of motion or mode of transit of the user or platform carrying a device is needed in portable navigation, as well as other technological domains. An extensive survey on motion mode recognition approaches is provided in this survey paper. The survey compares and describes motion mode recognition approaches from different viewpoints: usability and convenience, types of devices in terms of setup mounting and data acquisition, various types of sensors used, signal processing methods employed, features extracted, and classification techniques. This paper ends with a quantitative comparison of the performance of motion mode recognition modules developed by researchers in different domains.",
"title": ""
},
{
"docid": "e9678b8e4981a072ecdf92978ddf8c8f",
"text": "To the Editor: Drs Shephard and Balady 1 discuss the potential dangers of excessive exercise and the appropriate dose of physical activity. They cite 2 reports showing that in healthy persons, prolonged exercise can cause myocardial fatigue with a temporary depression of myocardial function. 2,3 In addition, recent research demonstrates that prolonged aerobic exercise may cause subclinical myocardial necrosis in individuals with no risk factors for cardiovascular disease. 4,5 Evidence also exists that apparently healthy individuals who are not active enough to meet a traditional exercise prescription (structured vigorous activity) are at a high risk for subclinical myocardial damage caused by prolonged strenuous exercise. 6",
"title": ""
},
{
"docid": "690a2b067af8810d5da7d3389b7b4d78",
"text": "Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NPcomplete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Current available methods of computing such a bound are either time-consuming or deliver low quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin,Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) exact minimum distortions found by Reluplex in small networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 3314,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum `1 adversarial distortion of a ReLU network with a 0.99 lnn approximation ratio unless NP=P, where n is the number of neurons in the network. Equal contribution Massachusetts Institute of Technology, Cambridge, MA UC Davis, Davis, CA Harvard University, Cambridge, MA UT Austin, Austin, TX. Source code is available at https://github.com/huanzhang12/CertifiedReLURobustness. Correspondence to: Tsui-Wei Weng <[email protected]>, Huan Zhang <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "803baca2f92e8e71739da691b5e9c4df",
"text": "Assume submodels are in exponential family: pm(x, z; θm) = exp { θ m ( φm(x)φ Z(z) ) −Am(θm) } for x ∈ X , z ∈ Zm and 0 otherwise Reformulation of Product EM: Aggregate parameters: b = ∑ m bm, bm = φ X m(x) θm E-step: compute expected sufficient statistics μ = E(b,∩mZm) def = Eq(z;b)φ(z) with support ∩mZm M-step: set θm to match moments φ X m(x)μ Exponential family formulation Two sources of intractability in the E-step: • Domain Z = ∩mZm is unwieldy (e.g., matchings) • Parameters b result in high tree-width graph New objective function: • A function of sufficient statistics μm and parameters θm for each submodel m = 1, . . . ,M • See paper for some preliminary bounds Algorithm: Aggregate parameters: b = ∑ m bm E-step: compute statistics μm = E(b′,Z ′) Aggregate statistics: μ̄ = 1 M ∑ m μm M-step: set θm to match moments φ X m(x)μ̄",
"title": ""
},
{
"docid": "bb15c4addb2d3660d8750c343b18b5c9",
"text": "Neural networks have many successful applications, while much less theoretical understanding has been gained. Towards bridging this gap, we study the problem of learning a two-layer overparameterized ReLU neural network for multi-class classification via stochastic gradient descent (SGD) from random initialization. In the overparameterized setting, when the data comes from mixtures of well-separated distributions, we prove that SGD learns a network with a small generalization error, albeit the network has enough capacity to fit arbitrary labels. Furthermore, the analysis provides interesting insights into several aspects of learning neural networks and can be verified based on empirical studies on synthetic data and on the MNIST dataset.",
"title": ""
},
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "aa2e16e6ed5d2610a567e358807834d4",
"text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.",
"title": ""
},
{
"docid": "929c3c0bd01056851952660ffd90673a",
"text": "SUMMARY: The Food and Drug Administration (FDA) is issuing this proposed rule to amend the 1994 tentative final monograph or proposed rule (the 1994 TFM) for over-the-counter (OTC) antiseptic drug products. In this proposed rule, we are proposing to establish conditions under which OTC antiseptic products intended for use by health care professionals in a hospital setting or other health care situations outside the hospital are generally recognized as safe and effective. In the 1994 TFM, certain antiseptic active ingredients were proposed as being generally recognized as safe for use in health care settings based on safety data evaluated by FDA as part of its ongoing review of OTC antiseptic drug products. However, in light of more recent scientific developments, we are now proposing that additional safety data are necessary to support the safety of antiseptic active ingredients for these uses. We also are proposing that all health care antiseptic active ingredients have in vitro data characterizing the ingredient's antimicrobial properties and in vivo clinical simulation studies showing that specified log reductions in the amount of certain bacteria are achieved using the ingredient. DATES: Submit electronic or written comments by October 28, 2015. See section VIII of this document for the proposed effective date of a final rule based on this proposed rule. ADDRESSES: You may submit comments by any of the following methods: Electronic Submissions Submit electronic comments in the following way: • Federal eRulemaking Portal: http:// www.regulations.gov. Follow the instructions for submitting comments.",
"title": ""
},
{
"docid": "4e8a27fd2e56dbc33e315bc9cb462239",
"text": "Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the function of VASs to overcome the limitations of response styles to Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability, response style bias, and parameter recovery. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.",
"title": ""
},
{
"docid": "2474db9eed888bba6bb4dd08658bc4b6",
"text": "BACKGROUND\nThe anabolic effect of resistance exercise is enhanced by the provision of dietary protein.\n\n\nOBJECTIVES\nWe aimed to determine the ingested protein dose response of muscle (MPS) and albumin protein synthesis (APS) after resistance exercise. In addition, we measured the phosphorylation of candidate signaling proteins thought to regulate acute changes in MPS.\n\n\nDESIGN\nSix healthy young men reported to the laboratory on 5 separate occasions to perform an intense bout of leg-based resistance exercise. After exercise, participants consumed, in a randomized order, drinks containing 0, 5, 10, 20, or 40 g whole egg protein. Protein synthesis and whole-body leucine oxidation were measured over 4 h after exercise by a primed constant infusion of [1-(13)C]leucine.\n\n\nRESULTS\nMPS displayed a dose response to dietary protein ingestion and was maximally stimulated at 20 g. The phosphorylation of ribosomal protein S6 kinase (Thr(389)), ribosomal protein S6 (Ser(240/244)), and the epsilon-subunit of eukaryotic initiation factor 2B (Ser(539)) were unaffected by protein ingestion. APS increased in a dose-dependent manner and also reached a plateau at 20 g ingested protein. Leucine oxidation was significantly increased after 20 and 40 g protein were ingested.\n\n\nCONCLUSIONS\nIngestion of 20 g intact protein is sufficient to maximally stimulate MPS and APS after resistance exercise. Phosphorylation of candidate signaling proteins was not enhanced with any dose of protein ingested, which suggested that the stimulation of MPS after resistance exercise may be related to amino acid availability. Finally, dietary protein consumed after exercise in excess of the rate at which it can be incorporated into tissue protein stimulates irreversible oxidation.",
"title": ""
},
{
"docid": "4ba85aeab0be2441c6705a9715a8f329",
"text": "Learning and remembering how to use APIs is difficult. While code-completion tools can recommend API methods, browsing a long list of API method names and their documentation is tedious. Moreover, users can easily be overwhelmed with too much information. We present a novel API recommendation approach that taps into the predictive power of repetitive code changes to provide relevant API recommendations for developers. Our approach and tool, APIREC, is based on statistical learning from fine-grained code changes and from the context in which those changes were made. Our empirical evaluation shows that APIREC correctly recommends an API call in the first position 59% of the time, and it recommends the correct API call in the top five positions 77% of the time. This is a significant improvement over the state-of-the-art approaches by 30-160% for top-1 accuracy, and 10-30% for top-5 accuracy, respectively. Our result shows that APIREC performs well even with a one-time, minimal training dataset of 50 publicly available projects.",
"title": ""
}
] |
scidocsrr
|
437bacf0d9a7d1be3a1130a3677a20b4
|
When is multitask learning effective? Semantic sequence prediction under varying data conditions
|
[
{
"docid": "2bfd884e92a26d017a7854be3dfb02e8",
"text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.",
"title": ""
}
] |
[
{
"docid": "759a4737f3774c1487670597f5e011d1",
"text": "Indoor positioning systems (IPS) based on Wi-Fi signals are gaining popularity recently. IPS based on Received Signal Strength Indicator (RSSI) could only achieve a precision of several meters due to the strong temporal and spatial variation of indoor environment. On the other hand, IPS based on Channel State Information (CSI) drive the precision into the sub-meter regime with several access points (AP). However, the performance degrades with fewer APs mainly due to the limit of bandwidth. In this paper, we propose a Wi-Fi-based time-reversal indoor positioning system (WiFi-TRIPS) using the location-specific fingerprints generated by CSIs with a total bandwidth of 1 GHz. WiFi-TRIPS consists of an offline phase and an online phase. In the offline phase, CSIs are collected in different 10 MHz bands from each location-of-interest and the timing and frequency synchronization errors are compensated. We perform a bandwidth concatenation to combine CSIs in different bands into a single fingerprint of 1 GHz. In the online phase, we evaluate the time-reversal resonating strength using the fingerprint from an unknown location and those in the database for location estimation. Extensive experiment results demonstrate a perfect 5cm precision in an 20cm × 70cm area in a non-line-of-sight office environment with one link measurement.",
"title": ""
},
{
"docid": "e08c2c82730900fea60f6a3c81300430",
"text": "The Internet of Things (IoT) is inter communication of embedded devices using networking technologies. The IoT will be one of the important trends in future, can affect the networking, business and communication. In this paper, proposing a remote sensing parameter of the human body which consists of pulse and temperature. The parameters that are used for sensing and monitoring will send the data through wireless sensors. Adding a web based observing helps to keep track of the regular health status of a patient. The sensing data will be continuously collected in a database and will be used to inform patient to any unseen problems to undergo possible diagnosis. Experimental results prove the proposed system is user friendly, reliable, economical.",
"title": ""
},
{
"docid": "e44b779a299c6454e5870b02e7186a9f",
"text": "Region-based convolutional neural networks (R-CNNs) have achieved great success in object detection recently. These deep models depend on region proposal algorithms to hypothesize object locations. In this paper, we combine a special region proposal algorithm with R-CNN, and apply it to pedestrian detection. The special algorithm is used to generate region proposals only for pedestrian class. It is different from the popular region proposal algorithm selective search that detects generic object locations. The experimental results prove that region proposals generated by our method are more applicable than selective search for pedestrian detection. Our method performs faster training and testing than the deep model based on, and it achieves competitive performance compared to the state-of-the-arts in pedestrian detection.",
"title": ""
},
{
"docid": "af382dbadec10d8480b09e51503071d2",
"text": "Scent communication plays a central role in the mating behavior of many nonhuman mammals but has often been overlooked in the study of human mating. However, a growing body of evidence suggests that men may perceive women's high-fertility body scents (collected near ovulation) as more attractive than their low-fertility body scents. The present study provides a methodologically rigorous replication of this finding, while also examining several novel questions. Women collected samples of their natural body scent twice--once on a low-fertility day and once on a high-fertility day of the ovulatory cycle. Tests of luteinizing hormone confirmed that women experienced ovulation within two days of their high-fertility session. Men smelled each woman's high- and low-fertility scent samples and completed discrimination and preference tasks. At above-chance levels, men accurately discriminated between women's high- and low-fertility scent samples (61%) and chose women's high-fertility scent samples as more attractive than their low-fertility scent samples (56%). Men also rated each scent sample on sexiness, pleasantness, and intensity, and estimated the physical attractiveness of the woman who had provided the sample. Multilevel modeling revealed that, when high- and low-fertility scent samples were easier to discriminate from each other, high-fertility scent samples received even more favorable ratings compared with low-fertility scent samples. This study builds on a growing body of evidence indicating that men are attracted to cues of impending ovulation in women and raises the intriguing question of whether women's cycling hormones influence men's attraction and sexual approach behavior.",
"title": ""
},
{
"docid": "63b78edf4fe9578d576ba89da14c850a",
"text": "Growth of internet era and corporate sector dealings communication online has introduced crucial security challenges in cyber space. Statistics of recent large scale attacks defined new class of threat to online world, advanced persistent threat (APT) able to impact national security and economic stability of any country. From all APTs, botnet is one of the well-articulated and stealthy attacks to perform cybercrime. Botnet owners and their criminal organizations are continuously developing innovative ways to infect new targets into their networks and exploit them. The concept of botnet refers collection of compromised computers (bots) infected by automated software robots, that interact to accomplish some distributed task which run without human intervention for illegal purposes. They are mostly malicious in nature and allow cyber criminals to control the infected machines remotely without the victim's knowledge. They use various techniques, communication protocols and topologies in different stages of their lifecycle; also specifically they can upgrade their methods at any time. Botnet is global in nature and their target is to steal or destroy valuable information from organizations as well as individuals. In this paper we present real world botnet (APTs) survey.",
"title": ""
},
{
"docid": "d540250c51e97622a10bcb29f8fde956",
"text": "With many advantages of rectangular waveguide and microstrip lines, substrate integrated waveguide (SIW) can be used for design of planar waveguide-like slot antenna. However, the bandwidth of this kind of antenna structure is limited. In this work, a parasitic dipole is introduced and coupled with the SIW radiate slot. The results have indicated that the proposed technique can enhance the bandwidth of the SIW slot antenna significantly. The measured bandwidth of fabricated antenna prototype is about 19%, indicating about 115% bandwidth enhancement than the ridged substrate integrated waveguide (RSIW) slot antenna.",
"title": ""
},
{
"docid": "eee687e5c110bbfdd447b7a58444f34e",
"text": "We present a \"scale-and-stretch\" warping method that allows resizing images into arbitrary aspect ratios while preserving visually prominent features. The method operates by iteratively computing optimal local scaling factors for each local region and updating a warped image that matches these scaling factors as closely as possible. The amount of deformation of the image content is guided by a significance map that characterizes the visual attractiveness of each pixel; this significance map is computed automatically using a novel combination of gradient and salience-based measures. Our technique allows diverting the distortion due to resizing to image regions with homogeneous content, such that the impact on perceptually important features is minimized. Unlike previous approaches, our method distributes the distortion in all spatial directions, even when the resizing operation is only applied horizontally or vertically, thus fully utilizing the available homogeneous regions to absorb the distortion. We develop an efficient formulation for the nonlinear optimization involved in the warping function computation, allowing interactive image resizing.",
"title": ""
},
{
"docid": "36c1257016e25ead101e61cf1128d733",
"text": "Malignant melanoma causes the majority of deaths related to skin cancer. Nevertheless, it is the most treatable one, depending on its early diagnosis. The early prognosis is a challenging task for both clinicians and dermatologist, due to the characteristic similarities of melanoma with other skin lesions such as dysplastic nevi. In the past decades, several computerized lesion analysis algorithms have been proposed by the research community for detection of melanoma. These algorithms mostly focus on differentiating melanoma from benign lesions and few have considered the case of melanoma against dysplastic nevi. In this paper, we consider the most challenging task and propose an automatic framework for differentiation of melanoma from dysplastic nevi. The proposed framework also considers combination and comparison of several texture features beside the well used colour and shape features based on \"ABCD\" clinical rule in the literature. Focusing on dermoscopy images, we evaluate the performance of the framework using two feature extraction approaches, global and local (bag of words) and three classifiers such as support vector machine, gradient boosting and random forest. Our evaluation revealed the potential of texture features and random forest as an almost independent classifier. Using texture features and random forest for differentiation of melanoma and dysplastic nevi, the framework achieved the highest sensitivity of 98% and specificity of 70%.",
"title": ""
},
{
"docid": "a99c8d5b74e2470b30706b57fd96868d",
"text": "Implant restorations have become a primary treatment option for the replacement of congenitally missing lateral incisors. The central incisor and canine often erupt in less than optimal positions adjacent to the edentulous lateral incisor space, and therefore preprosthetic orthodontic treatment is frequently required. Derotation of the central incisor and canine, space closure and correction of root proximities may be required to create appropriate space in which to place the implant and achieve an esthetic restoration. This paper discusses aspects of preprosthetic orthodontic diagnosis and treatment that need to be considered with implant restorations.",
"title": ""
},
{
"docid": "bb0be0730200ae47d9b87d3c6a915008",
"text": "Human ESC-derived mesenchymal stem cell (MSC)-conditioned medium (CM) was previously shown to mediate cardioprotection during myocardial ischemia/reperfusion injury through large complexes of 50-100 nm. Here we show that these MSCs secreted 50- to 100-nm particles. These particles could be visualized by electron microscopy and were shown to be phospholipid vesicles consisting of cholesterol, sphingomyelin, and phosphatidylcholine. They contained coimmunoprecipitating exosome-associated proteins, e.g., CD81, CD9, and Alix. These particles were purified as a homogeneous population of particles with a hydrodynamic radius of 55-65 nm by size-exclusion fractionation on a HPLC. Together these observations indicated that these particles are exosomes. These purified exosomes reduced infarct size in a mouse model of myocardial ischemia/reperfusion injury. Therefore, MSC mediated its cardioprotective paracrine effect by secreting exosomes. This novel role of exosomes highlights a new perspective into intercellular mediation of tissue injury and repair, and engenders novel approaches to the development of biologics for tissue repair.",
"title": ""
},
{
"docid": "faa82c37ea37ac9703b471302466c735",
"text": "An accurate and robust face recognition system was developed and tested. This system exploits the feature extraction capabilities of the discrete cosine transform (DCT) and invokes certain normalization techniques that increase its robustness to variations in facial geometry and illumination. The method was tested on a variety of available face databases, including one collected at McGill University. The system was shown to perform very well when compared to other approaches.",
"title": ""
},
{
"docid": "24297f719741f6691e5121f33bafcc09",
"text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells. This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.",
"title": ""
},
{
"docid": "5a397012744d958bb1a69b435c73e666",
"text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.",
"title": ""
},
{
"docid": "d4ee96388ca88c0a5d2a364f826dea91",
"text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.",
"title": ""
},
{
"docid": "a045911e831d63d1482619224fb15130",
"text": "The latest decade lead to a unconstrained advancement of the importance of online networking. Due to the gigantic measures of records appearing in web organizing, there is a colossal necessity for the programmed examination of such records. Online networking customer's comments expect a basic part in building or changing the one's acknowledgments concerning some specific indicate or making it standard. This paper demonstrates a preliminary work to exhibit the sufficiency of machine learning prescient calculations on the remarks of most well known long range informal communication site, Facebook. We showed the customer remark patters, over the posts on Facebook Pages and expected that what number of remarks a post is depended upon to get in next H hrs. To automate the technique, we developed an item display containing the crawler, information processor and data disclosure module. For prediction, we used the Linear Regression model (Simple Linear model, Linear relapse model and Pace relapse model) and Non-Linear Regression model(Decision tree, MLP) on different data set varieties and evaluated them under the appraisal estimations Hits@10, AUC@10, Processing Time and Mean Absolute Error.",
"title": ""
},
{
"docid": "cf2e54d22fbf261a51a226f7f5adc4f5",
"text": "We propose a new fast, robust and controllable method to simulate the dynamic destruction of large and complex objects in real time. The common method for fracture simulation in computer games is to pre-fracture models and replace objects by their pre-computed parts at run-time. This popular method is computationally cheap but has the disadvantages that the fracture pattern does not align with the impact location and that the number of hierarchical fracture levels is fixed. Our method allows dynamic fracturing of large objects into an unlimited number of pieces fast enough to be used in computer games. We represent visual meshes by volumetric approximate convex decompositions (VACD) and apply user-defined fracture patterns dependent on the impact location. The method supports partial fracturing meaning that fracture patterns can be applied locally at multiple locations of an object. We propose new methods for computing a VACD, for approximate convex hull construction and for detecting islands in the convex decomposition after partial destruction in order to determine support structures.",
"title": ""
},
{
"docid": "5e8f2e9d799b865bb16bd3a68003db73",
"text": "A robust road markings detection algorithm is a fundamental component of intelligent vehicles' autonomous navigation in urban environment. This paper presents an algorithm for detecting road markings including zebra crossings, stop lines and lane markings to provide road information for intelligent vehicles. First, to eliminate the impact of the perspective effect, an Inverse Perspective Mapping (IPM) transformation is applied to the images grabbed by the camera; the region of interest (ROI) was extracted from IPM image by a low level processing. Then, different algorithms are adopted to extract zebra crossings, stop lines and lane markings. The experiments on a large number of street scenes in different conditions demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "fda90903b8466d33354ea6ea2c5f1c11",
"text": "Hydrogels are three-dimensional cross-linked polymer network that can respond to the fluctuations of the environmental stimuli. These biomaterials can incorporate large quantum of biological fluids and swell. When swelled, they are soft & rubbery and resemble the living tissue, exhibiting excellent biocompatibility. Today, drug delivery experience several challenges where hydrogel could be one potential answer to those. Thanks to the unique properties of hydrogel for which they are widely exposed to different biomedical fields. Hence the preparation techniques of hydrogel biomaterial and the evaluation of the properties are of utmost significance. Literature reveals that this three dimensional architecture could be homo-polymeric, co-polymeric, semiinterpenetrating and interpenetrating polymer networks (IPN) based on preparation methods. Polymeric blends like semi-IPN have also been investigated to satisfy the specific needs of biomedical field. Such blends have shown superior performances over individual polymers. Unique biocompatibility, flexible methods of synthesis and tailor able physical properties have made the hydrogels to be used as a drug delivery device to tissue engineering scaffolds. As scaffolds they should provide structural integrity like tissue constructs and as a drug carrier it should have sufficient mechanical strength to control and protect the drug and proteins until they are delivered to the specific sites of the biological system. Hence, the evaluation of swelling, mechanical and biocompatible properties consider more attention before the hydrogels are applied. In this review article an attempt has been made to describe the available methods of hydrogel synthesis along with their inevitable properties.",
"title": ""
},
{
"docid": "45f534e9bc92e4f70a1dc25b8d82f62c",
"text": "A new implementation has been proposed for the beta multiplier voltage reference to improve its performance with regard to process variations. The scope for silicon tunability on the proposed circuit is also discussed. The circuit was implemented in a 0.18 /spl mu/ process and was found to have a temperature sensitivity of less than 500 ppm/C in the virgin die without trimming.",
"title": ""
},
{
"docid": "f56c5a623b29b88f42bf5d6913b2823e",
"text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.",
"title": ""
}
] |
scidocsrr
|
aae9c50f57847d01350770f0cf0b02d5
|
Deep neural networks and mixed integer linear optimization
|
[
{
"docid": "79560f7ec3c5f42fe5c5e0ad175fe6a0",
"text": "The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximum amount of input or sensor perturbation which is still tolerated. This problem of computing maximum perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number (up to a certain limit) of computing cores in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximum resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.",
"title": ""
}
] |
[
{
"docid": "3c59a0673cb80ee7e2b6ebd0fa5ea27d",
"text": "Resolvers are transducers that are used to sense the angular position of rotational machines. The analog resolver is necessary to use resolver to digital converter. Among the RDC software method, angle tracking observer (ATO) is the most popular method. In an actual resolverbased position sensing system, amplitude imbalance dominantly distorts the estimate position information of ATO. Minority papers have reported position error compensation of resolver’s output signal with amplitude imbalance. This paper proposes new ATO algorithm in order to compensate position errors caused by the amplitude imbalance. There is no need premeasured off line data. This is easy, simple, cost-effective, and able to work on line compensation. To verify feasibility of the proposed algorithm, simulation and experiments are carried out.",
"title": ""
},
{
"docid": "483880f697329701db9412f5569b802f",
"text": "Online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Our findings suggest that the current methods used for sorting OCR may bias both their readership and helpfulness. This study can be used by online vendors to develop scalable automated systems for sorting and classification of OCR which will benefit both vendors and consumers.",
"title": ""
},
{
"docid": "7db6124dc1f196ec2067a2d9dc7ba028",
"text": "We describe a graphical representation of probabilistic relationships-an alternative to the Bayesian network-called a dependency network. Like a Bayesian network, a dependency network has a graph and a probability component. The graph component is a (cyclic) directed graph such that a node's parents render that node independent of all other nodes in the network. The probability component consists of the probability of a node given its parents for each node (as in a Bayesian network). We identify several basic properties of this representation, and describe its use in collaborative filtering (the task of predicting preferences) and the visualization of predictive relationships.",
"title": ""
},
{
"docid": "f1d1a73f21dcd1d27da4e9d4a93c5581",
"text": "Movements of interfaces can be analysed in terms of whether they are sensible, sensable and desirable. Sensible movements are those that users naturally perform; sensable are those that can be measured by a computer; and desirable movements are those that are required by a given application. We show how a systematic comparison of sensible, sensable and desirable movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: the Augurscope II, a mobile augmented reality interface for outdoors; the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs; and pointing flashlights at walls and posters in order to play sounds.",
"title": ""
},
{
"docid": "a51a3e1ae86e4d178efd610d15415feb",
"text": "The availability of semantically annotated image and video assets constitutes a critical prerequisite for the realisation of intelligent knowledge management services pertaining to realistic user needs. Given the extend of the challenges involved in the automatic extraction of such descriptions, manually created metadata play a significant role, further strengthened by their deployment in training and evaluation tasks related to the automatic extraction of content descriptions. The different views taken by the two main approaches towards semantic content description, namely the Semantic Web and MPEG-7, as well as the traits particular to multimedia content due to the multiplicity of information levels involved, have resulted in a variety of image and video annotation tools, adopting varying description aspects. Aiming to provide a common framework of reference and furthermore to highlight open issues, especially with respect to the coverage and the interoperability of the produced metadata, in this chapter we present an overview of the state of the art in image and video annotation tools.",
"title": ""
},
{
"docid": "e98ddf64ae7f68fffd2b68a13e12bba0",
"text": "The Dark Triad – narcissism, Machiavellianism, and psychopathy – have traditionally been considered to be undesirable traits. However, emerging work suggest that not only may there be a positive side to possessing these traits but they may also serve important adaptive functions, even if the strategies associated with them are viewed as socially undesirable. In an online survey (N = 336), we investigated the costs and benefits of the Dark Triad within the domain of mating psychology. The social style and lower order personality traits of the Dark Triad traits facilitated increased mateships in the form of poaching mates from others and being poached oneself to form mateships, pointing to possible benefits of possessing the Dark Triad traits. However, the costside was evidenced with rates of mates abandoning their current relationship for a new one. Mate retention is a problem faced by those with these traits and the tactics used to retain mates were characteristic of the Dark Triad: aggressive and narcisstic. Results are discussed using an adaptionist paradigm. Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "884ea5137f9eefa78030608097938772",
"text": "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\"- for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents make a similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation- free motions for each of the agents. We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios.",
"title": ""
},
{
"docid": "df5c521e040c59ea2b9ce044fa68d864",
"text": "We consider the problem of estimating real-time traffic conditions from sparse, noisy GPS probe vehicle data. We specifically address arterial roads, which are also known as the secondary road network (highways are considered the primary road network). We consider several estimation problems: historical traffic patterns, real-time traffic conditions, and forecasting future traffic conditions. We assume that the data available for these estimation problems is a small set of sparsely traced vehicle trajectories, which represents a small fraction of the total vehicle flow through the network. We present an expectation maximization algorithm that simultaneously learns the likely paths taken by probe vehicles as well as the travel time distributions through the network. A case study using data from San Francisco taxis is used to illustrate the performance of the algorithm.",
"title": ""
},
{
"docid": "c56daed0cc2320892fad3ac34ce90e09",
"text": "In this paper we describe the open source data analytics platform KNIME, focusing particularly on extensions and modules supporting fuzzy sets and fuzzy learning algorithms such as fuzzy clustering algorithms, rule induction methods, and interactive clustering tools. In addition we outline a number of experimental extensions, which are not yet part of the open source release and present two illustrative examples from real world applications to demonstrate the power of the KNIME extensions.",
"title": ""
},
{
"docid": "e9e7a68578f23b85bee9ebfe1b923f87",
"text": "Low-density lipoprotein (LDL) is the most abundant and the most atherogenic class of cholesterol-carrying lipoproteins in human plasma. The level of plasma LDL is regulated by the LDL receptor, a cell surface glycoprotein that removes LDL from plasma by receptor-mediated endocytosis. Defects in the gene encoding the LDL receptor, which occur in patients with familial hypercholesterolemia, elevate the plasma LDL level and produce premature coronary atherosclerosis. The physiologically important LDL receptors are located primarily in the liver, where their number is regulated by the cholesterol content of the hepatocyte. When the cholesterol content of hepatocytes is raised by ingestion of diets high in saturated fat and cholesterol, LDL receptors fall and plasma LDL levels rise. Conversely, maneuvers that lower the cholesterol content of hepatocytes, such as ingestion of drugs that inhibit cholesterol synthesis (mevinolin or compactin) or prevent the reutilization of bile acids (cholestyramine or colestipol), stimulate LDL receptor production and lower plasma LDL levels. The normal process of receptor regulation can therefore be exploited in powerful and novel ways so as to reverse hypercholesterolemia and prevent atherosclerosis.",
"title": ""
},
{
"docid": "980ad058a2856048765f497683557386",
"text": "Hierarchical reinforcement learning (HRL) has recently shown promising advances on speeding up learning, improving the exploration, and discovering intertask transferable skills. Most recent works focus on HRL with two levels, i.e., a master policy manipulates subpolicies, which in turn manipulate primitive actions. However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive. Therefore, in this paper, we propose a diversity-driven extensible HRL (DEHRL), where an extensible and scalable framework is built and learned levelwise to realize HRL with multiple levels. DEHRL follows a popular assumption: diverse subpolicies are useful, i.e., subpolicies are believed to be more useful if they are more diverse. However, existing implementations of this diversity assumption usually have their own drawbacks, which makes them inapplicable to HRL with multiple levels. Consequently, we further propose a novel diversity-driven solution to achieve this assumption in DEHRL. Experimental studies evaluate DEHRL with five baselines from four perspectives in two domains; the results show that DEHRL outperforms the state-of-the-art baselines in all four aspects.",
"title": ""
},
{
"docid": "3a86f1f91cfaa398a03a56abb34f497c",
"text": "We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as nonoverlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, for example, line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation to approximate generalized blue noise properties. To generate these samples with the desired properties, we first construct a set of nonoverlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach that combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum..",
"title": ""
},
{
"docid": "6278090a0206b812a31f5eb60f6d9381",
"text": "The “Mozart effect” reported by Rauscher, Shaw, and Ky (1993, 1995) indicates that spatial-temporal abilities are enhanced after listening to music composed by Mozart. We replicated and extended the effect in Experiment 1: Performance on a spatial-temporal task was better after participants listened to a piece composed by Mozart or by Schubert than after they sat in silence. In Experiment 2, the advantage for the music condition disappeared when the control condition consisted of a narrated story instead of silence. Rather, performance was a function of listeners’preference (music or story), with better performance following the preferred condition. Claims that exposure to music composed by Mozart improves spatial-temporal abilities (Rauscher, Shaw, & Ky, 1993, 1995) have received widespread attention in the news media. Based on these findings, Georgia Governor Zell Miller recently budgeted for a compact disc or cassette for each infant born in state. Reports published in Science(Holden, 1994), the APA Monitor(Martin, 1994), and the popular press indicate that scientists and the general public are giving serious consideration to the possibility that music listening and music lessons improve other abilities. If these types of associations can be confirmed, the implications would be considerable. For example, listening to music could improve the performance of pilots and structural engineers. Such associations would also provide evidence against contemporary theories of modularity (Fodor, 1983) and multiple intelligences (Gardner, 1993), which argue for independence of functioning across domains. Although facilitation in spatial-temporal performance following exposure to music (Rauscher et al., 1993, 1995) is temporary (10 to 15 min), long-term improvements in spatial-temporal reasoning as a consequence of music lessons have also been reported (Gardiner, Fox, Knowles, & Jeffrey, 1996; Rauscher et al., 1997). Unfortunately, the media have not been careful to distinguish these disparate findings. The purpose of the present study was to provide a more complete explanation of the short-term phenomenon. Rauscher and her colleagues have proposed that the so-called Mozart effect can be explained by the trion model (Leng & Shaw, 1991), which posits that exposure to complex musical compositions excites cortical firing patterns similar to those used in spatial-temporal reasoning, so that performance on spatial-temporal tasks is positively affected by exposure to music. On the surface, the Mozart effect is similar to robust psychological phenomena such as transfer or priming. For example, the effect could be considered an instance of positive, nonspecific transfer across domains and modalities (i.e., music listening and visual-spatial performance) that do not have a well-documented association. Transfer is said to occur when knowledge or skill acquired in one situation influences performance in another (Postman, 1971). In the case of the Mozart effect, however, passive listening to music—rather than overt learning—influences spatial-temporal performance. The Mozart effect also bears similarities to associative priming effects and spreading activation (Collins & Loftus, 1975). But priming effects tend to disappear when the prime and the target have few features in common (Klimesch, 1994, pp. 163–165), and cross-modal priming effects are typically weak (Roediger & McDermott, 1993). 
Moreover, it is far from obvious which features are shared by stimuli as diverse as a Mozart sonata and a spatial-temporal task. In short, the Mozart effect described by Rauscher et al. (1993, 1995) is difficult to situate in a context of known cognitive phenomena. Stough, Kerkin, Bates, and Mangan (1994) failed to replicate the findings of Rauscher et al., although their use of Raven’s Advanced Progressive Matrices rather than spatial tasks from the Stanford-Binet Intelligence Scale (Rauscher et al., 1993, 1995) to assess spatial abilities may account for the discrepancies. Whereas tasks measuring spatial recognition (such as the Raven’s test) require a search for physical similarities among visually presented stimuli, spatial-temporal tasks (e.g., the Paper Folding and Cutting, PF&C, subtest of the Stanford-Binet; mental rotation tasks; jigsaw puzzles) require mental transformation of the stimuli (Rauscher & Shaw, 1998). In their review of previous successes and failures at replicating the Mozart effect, Rauscher and Shaw (1998) concluded that the effect is obtainable only with spatial-temporal tasks. Our goal in Experiment 1 was to replicate and extend the basic findings of Rauscher et al. (1993, 1995). A completely computer-controlled procedure was used to test adults’ performance on a PF&C task immediately after they listened to music or sat in silence. Half of the participants listened to Mozart during the music condition; the other half listened to Schubert. The purpose of Experiment 2 was to test the hypothesis that the Mozart effect is actually a consequence of participants’ preference for one testing condition over another, the assumption being that better performance would follow the preferred condition. Control conditions in Rauscher et al. (1993) included a period of silence or listening to a relaxation tape, both of which might have been less interesting or arousing than listening to a Mozart sonata. Consequently, if the participants in that study preferred the Mozart condition, this factor might account for the differential performance on the spatial-temporal task that followed. In a subsequent experiment (Rauscher et al., 1995), comparison conditions involved silence or a combination of minimalist music (Philip Glass), a taped short story, and repetitive dance music. Minimalist and repetitive music might also induce boredom or low levels of arousal, much like silence, and the design precluded direct comparison of the short-story and music conditions. Indeed, in all other instances in which the Mozart effect has been successfully replicated (see Rauscher & Shaw, 1998), control conditions consisted of sitting in silence or listening to relaxation tapes or repetitive music. In Experiment 2, our control condition involved simply listening to a short story.",
"title": ""
},
{
"docid": "5c9f0843ebc26bf8a52d7633acc33c58",
"text": "This thesis aim is to present results on a stochastic model called reinforced random walk. This process was conceived in the late 1980’s by Coppersmith and Diaconis and can be regarded as a generalization of a random walk on a weighted graph. These reinforced walks have non-homogeneous transition probabilities, which arise from an interaction between the process and the weights. We survey articles on the subject, perform simulations and extend a theorem by Pemantle.",
"title": ""
},
{
"docid": "551f1dca9718125b385794d8e12f3340",
"text": "Social media provides increasing opportunities for users to voluntarily share their thoughts and concerns in a large volume of data. While user-generated data from each individual may not provide considerable information, when combined, they include hidden variables, which may convey significant events. In this paper, we pursue the question of whether social media context can provide socio-behavior \"signals\" for crime prediction. The hypothesis is that crowd publicly available data in social media, in particular Twitter, may include predictive variables, which can indicate the changes in crime rates. We developed a model for crime trend prediction where the objective is to employ Twitter content to identify whether crime rates have dropped or increased for the prospective time frame. We also present a Twitter sampling model to collect historical data to avoid missing data over time. The prediction model was evaluated for different cities in the United States. The experiments revealed the correlation between features extracted from the content and crime rate directions. Overall, the study provides insight into the correlation of social content and crime trends as well as the impact of social data in providing predictive indicators.",
"title": ""
},
{
"docid": "024570b927c0967bf0c2868c36fc16d6",
"text": "Cognitive training has been shown to improve executive functions (EFs) in middle childhood and adulthood. However, fewer studies have targeted the preschool years-a time when EFs undergo rapid development. The present study tested the effects of a short four session EF training program in 54 four-year-olds. The training group significantly improved their working memory from pre-training relative to an active control group. Notably, this effect extended to a task sharing few surface features with the trained tasks, and continued to be apparent 3 months later. In addition, the benefits of training extended to a measure of mathematical reasoning 3 months later, indicating that training EFs during the preschool years has the potential to convey benefits that are both long-lasting and wide-ranging.",
"title": ""
},
{
"docid": "193dc2e716580d8277cd9309feaf1ecb",
"text": "In this paper we propose a framework for gradient descent image alignment in the Fourier domain. Specifically, we propose an extension to the classical Lucas & Kanade (LK) algorithm where we represent the source and template image's intensity pixels in the complex 2D Fourier domain rather than in the 2D spatial domain. We refer to this approach as the Fourier LK (FLK) algorithm. The FLK formulation is especially advantageous, over traditional LK, when it comes to pre-processing the source and template images with a bank of filters (e.g., Gabor filters) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. We demonstrate robust image matching performance on a variety of objects in the presence of substantial illumination differences with exactly the same computational overhead as that of traditional inverse compositional LK during fitting.",
"title": ""
},
{
"docid": "210e9bc5f2312ca49438e6209ecac62e",
"text": "Image classification has become one of the main tasks in the field of computer vision technologies. In this context, a recent algorithm called CapsNet that implements an approach based on activity vectors and dynamic routing between capsules may overcome some of the limitations of the current state of the art artificial neural networks (ANN) classifiers, such as convolutional neural networks (CNN). In this paper, we evaluated the performance of the CapsNet algorithm in comparison with three well-known classifiers (Fisherfaces, LeNet, and ResNet). We tested the classification accuracy on four datasets with a different number of instances and classes, including images of faces, traffic signs, and everyday objects. The evaluation results show that even for simple architectures, training the CapsNet algorithm requires significant computational resources and its classification performance falls below the average accuracy values of the other three classifiers. However, we argue that CapsNet seems to be a promising new technique for image classification, and further experiments using more robust computation resources and refined CapsNet architectures may produce better outcomes.",
"title": ""
},
{
"docid": "5158b5da8a561799402cb1ef3baa3390",
"text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.",
"title": ""
},
{
"docid": "069636576cbf6c5dd8cead8fff32ea4b",
"text": "Sleep-disordered breathing-comprising obstructive sleep apnoea (OSA), central sleep apnoea (CSA), or a combination of the two-is found in over half of heart failure (HF) patients and may have harmful effects on cardiac function, with swings in intrathoracic pressure (and therefore preload and afterload), blood pressure, sympathetic activity, and repetitive hypoxaemia. It is associated with reduced health-related quality of life, higher healthcare utilization, and a poor prognosis. Whilst continuous positive airway pressure (CPAP) is the treatment of choice for patients with daytime sleepiness due to OSA, the optimal management of CSA remains uncertain. There is much circumstantial evidence that the treatment of OSA in HF patients with CPAP can improve symptoms, cardiac function, biomarkers of cardiovascular disease, and quality of life, but the quality of evidence for an improvement in mortality is weak. For systolic HF patients with CSA, the CANPAP trial did not demonstrate an overall survival or hospitalization advantage for CPAP. A minute ventilation-targeted positive airway therapy, adaptive servoventilation (ASV), can control CSA and improves several surrogate markers of cardiovascular outcome, but in the recently published SERVE-HF randomized trial, ASV was associated with significantly increased mortality and no improvement in HF hospitalization or quality of life. Further research is needed to clarify the therapeutic rationale for the treatment of CSA in HF. Cardiologists should have a high index of suspicion for sleep-disordered breathing in those with HF, and work closely with sleep physicians to optimize patient management.",
"title": ""
}
] |
scidocsrr
|
8b0824b4dc48cba0b56d4824a7fea6b9
|
Being Happy, Healthy and Whole Watching Movies That Affect Our Emotions
|
[
{
"docid": "93d8b8afe93d10e54bf4a27ba3b58220",
"text": "Researchers interested in emotion have long struggled with the problem of how to elicit emotional responses in the laboratory. In this article, we summarise five years of work to develop a set of films that reliably elicit each of eight emotional states (amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise). After evaluating over 250 films, we showed selected film clips to an ethnically diverse sample of 494 English-speaking subjects. We then chose the two best films for each of the eight target emotions based on the intensity and discreteness of subjects' responses to each film. We found that our set of 16 films successfully elicited amusement, anger, contentment. disgust, sadness, surprise, a relatively neutral state, and, to a lesser extent, fear. We compare this set of films with another set recently described by Philippot (1993), and indicate that detailed instructions for creating our set of film stimuli will be provided on request.",
"title": ""
},
{
"docid": "21e0255c8c127cc53bd2a370faf34feb",
"text": "A science of positive subjective experience, positive individual traits, and positive institutions promises to improve quality of life and prevent the pathologies that arise when life is barren and meaningless. The exclusive focus on pathology that has dominated so much of our discipline results in a model of the human being lacking the positive features that make life worth living. Hope, wisdom, creativity, future mindedness, courage, spirituality, responsibility, and perseverance are ignored or explained as transformations of more authentic negative impulses. The 15 articles in this millennial issue of the American Psychologist discuss such issues as what enables happiness, the effects of autonomy and self-regulation, how optimism and hope affect health, what constitutes wisdom, and how talent and creativity come to fruition. The authors outline a framework for a science of positive psychology, point to gaps in our knowledge, and predict that the next century will see a science and profession that will come to understand and build the factors that allow individuals, communities, and societies to flourish.",
"title": ""
}
] |
[
{
"docid": "c5759678a84864a843c20c5f4a23f29f",
"text": "We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate intensity per pixel I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t) for each pixel and uses an ultra-short pulse laser for illumination. Emerging technologies are supporting cameras with a temporal-profile per pixel at picosecond resolution, allowing us to capture an ultra-high speed time-image. This time-image contains the time profile of irradiance incident at a sensor pixel. We experimentally corroborated our theory with free space hardware experiments using a femtosecond laser and a picosecond accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.",
"title": ""
},
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
},
{
"docid": "ad8b60be0abf430fa38c22b39f074df2",
"text": "Social media is playing an increasingly vital role in information dissemination. But with dissemination being more distributed, content often makes multiple hops, and consequently has opportunity to change. In this paper we focus on content that should be changing the least, namely quoted text. We find changes to be frequent, with their likelihood depending on the authority of the copied source and the type of site that is copying. We uncover patterns in the rate of appearance of new variants, their length, and popularity, and develop a simple model that is able to capture them. These patterns are distinct from ones produced when all copies are made from the same source, suggesting that information is evolving as it is being processed collectively in online social media.",
"title": ""
},
{
"docid": "f260bb2ddc4b0b6c855727c2b8c389fb",
"text": "At present, medical experts and researchers turn their attention towards using robotic devices to facilitate human limb rehabilitation. An exoskeleton is such a robotic device, which is used to perform rehabilitation, motion assistance and power augmentation tasks. For effective operation, it is supposed to follow the structure and the motion of the natural human limb. This paper propose a robotic rehabilitation exoskeleton with novel shoulder joint actuation mechanism with a moving center of glenohumeral (CGH) joint. The proposed exoskeleton has four active degrees of freedom (DOFs), namely; shoulder flexion/extension, abduction/adduction, pronation/supination (external/internal rotation), and elbow flexion/extension. In addition to those motions mentioned above, three passive DOFs had been introduced to the shoulder joint mechanism in order to provide allowance for the scapular motion of the shoulder. The novel mechanism allows the movement of CGH — joint in two planes; namely frontal plane during shoulder abduction/adduction and transverse plane during flexion/extension. The displacement of the CGH — joint axis was measured experimentally. These results are then incorporated into the novel mechanism, which takes into account the natural movement characteristics of the human shoulder joint. It is intended to reduce excessive stress on patient's upper limb while carrying out rehabilitation exercises.",
"title": ""
},
{
"docid": "947816f1e72045254c5ec45d140efabe",
"text": "ÐThe traveling purchaser problem (TPP) is an interesting generalization of the well-known traveling salesman problem (TSP), in which a list of commodity items have to be purchased at some markets selling various commodities with dierent prices, and the total travel and purchase costs must be minimized. Applications include the purchase of raw materials for the manufacturing factories in which the total cost has to be minimized, and the scheduling of jobs over some machines with dierent set-up and job processing costs in which the total costs for completing the jobs has to be minimized. The TPP has been shown to be computationally intractable. Therefore, many heuristic solution procedures, including the Search algorithm, the Generalized-Savings algorithm, the Tour-Reduction algorithm, and the Commodity-Adding algorithm have been proposed to solve the TPP approximately. In this paper, we consider some variations of these algorithms to improve the solutions. The proposed variations are compared with the existing solution procedures. The results indicate that the proposed variations signi®cantly improve the existing solutions. # 1998 Elsevier Science Ltd. All rights reserved",
"title": ""
},
{
"docid": "354dd677ae444b224e3955661b26a503",
"text": "This paper presents an extended target tracking method for tracking cars in urban traffic using data from laser range sensors. Results are presented for three real world datasets that contain multiple cars, occlusions, and maneuver changes. The car's shape is approximated by a rectangle, and single track steering models are used for the target kinematics. A multiple model approach is taken for both the dynamics modeling and the measurement modeling. A comparison to ground truth shows that the estimation errors are generally very small: on average the absolute error is less than half a degree for the heading. Multiple cars are handled using a multiple model PHD filter, where a variable probability of detection is integrated to enable tracking of occluded cars.",
"title": ""
},
{
"docid": "30c96eb397b515f6b3e4d05c071413d1",
"text": "Thin-film solar cells have the potential to significantly decrease the cost of photovoltaics. Light trapping is particularly critical in such thin-film crystalline silicon solar cells in order to increase light absorption and hence cell efficiency. In this article we investigate the suitability of localized surface plasmons on silver nanoparticles for enhancing the absorbance of silicon solar cells. We find that surface plasmons can increase the spectral response of thin-film cells over almost the entire solar spectrum. At wavelengths close to the band gap of Si we observe a significant enhancement of the absorption for both thin-film and wafer-based structures. We report a sevenfold enhancement for wafer-based cells at =1200 nm and up to 16-fold enhancement at =1050 nm for 1.25 m thin silicon-on-insulator SOI cells, and compare the results with a theoretical dipole-waveguide model. We also report a close to 12-fold enhancement in the electroluminescence from ultrathin SOI light-emitting diodes and investigate the effect of varying the particle size on that enhancement. © 2007 American Institute of Physics. DOI: 10.1063/1.2734885",
"title": ""
},
{
"docid": "9134d28f62a2917f028930e937ee30b3",
"text": "In this paper, we comprehensively discuss the current progress of visual–inertial (VI) navigation systems and sensor fusion research with a particular focus on small unmanned aerial vehicles, known as microaerial vehicles (MAVs). Such fusion has become very topical due to the complementary characteristics of the two sensing modalities. We discuss the pros and cons of the most widely implemented VI systems against the navigational and maneuvering capabilities of MAVs. Considering the issue of optimum data fusion from multiple heterogeneous sensors, we examine the potential of the most widely used advanced state estimation techniques (both linear and nonlinear as well as Bayesian and non-Bayesian) against various MAV design considerations. Finally, we highlight several research opportunities and potential challenges associated with each technique.",
"title": ""
},
{
"docid": "1f56f045a9b262ce5cd6566d47c058bb",
"text": "The growing popularity and development of data mining technologies bring serious threat to the security of individual,'s sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way so as to perform data mining algorithms effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss his privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation on the sensitive information. By differentiating the responsibilities of different users with respect to security of sensitive information, we would like to provide some useful insights into the study of PPDM.",
"title": ""
},
{
"docid": "afabc44116cc1141c00c3528f1509c18",
"text": "Low-rank representation (LRR) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. For a given set of observed data corrupted with sparse errors, LRR aims at learning a lowest-rank representation of all data jointly. LRR has broad applications in pattern recognition, computer vision and signal processing. In the real world, data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space. However, the LRR method does not take into account the non-linear geometric structures within data, thus the locality and similarity information among data may be missing in the learning process. To improve LRR in this regard, we propose a general Laplacian regularized low-rank representation framework for data representation where a hypergraph Laplacian regularizer can be readily introduced into, i.e., a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR). By taking advantage of the graph regularizer, our proposed method not only can represent the global low-dimensional structures, but also capture the intrinsic non-linear geometric information in data. The extensive experimental results on image clustering, semi-supervised image classification and dimensionality reduction tasks demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "686a8e474cece7380d3401a023e678e9",
"text": "This paper introduces SDWS (Semantic Description of Web Services), a Web tool which generates semantic descriptions from collections of Web services. The fundamental approach of SDWS consists of the integration of a set of ontological models for the representation of different Web service description languages and models. The main contributions of this proposal are (i) a general ontological model for the representation of Web services, (ii) a set of language-specific ontological models for the representation of different Web service descriptions implementations, and (iii) a set of software modules that automatically parse Web service descriptions and produce their respective ontological representation. The design of the generic service model incorporates the common elements that all service descriptions share: a service name, a set of operations, and input and output parameters; together with other important elements that semantic models define: preconditions and effects. Experimental results show that the automatic generation of semantic descriptions from public Web services is feasible and represents an important step towards the integration of a general semantic service registry.",
"title": ""
},
{
"docid": "4f3177b303b559f341b7917683114257",
"text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.",
"title": ""
},
{
"docid": "9e2516a141cb6e46cfa6d27e723a7ba9",
"text": "In this paper, we present the method we developed when participating to the e-Risk pilot task. We use machine learning in order to solve the problem of early detection of depressive users in social media relying on various features that we detail in this paper. We submitted 4 models which differences are also detailed in this paper. Best results were obtained when using a combination of lexical and statistical features.",
"title": ""
},
{
"docid": "d96c9204c552181e4d00ed961b18c665",
"text": "We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.",
"title": ""
},
{
"docid": "9904ac77b96bdd634322701a53149b4e",
"text": "Brain-computer interface can have a profound impact on the life of paralyzed or elderly citizens as they offer control over various devices without any necessity of movement of the body parts. This technology has come a long way and opened new dimensions in improving our life. Use of electroencephalogram (EEG wave) based control schemes can change the shape of the lives of the disabled citizens if incorporated with an electric wheelchair through a wearable device. Electric wheelchairs are nowadays commercially available which provides mobility to the disabled persons with relative ease. But most of the commercially available products are much expensive and controlled through the joystick, hand gesture, voice command, etc. which may not be viable control scheme for severely disabled or paralyzed persons. In our research work, we have developed a low-cost electric wheelchair using locally available cheap parts and incorporated brain-computer interface considering the affordability of people from developing countries. So, people who have lost their control over their limbs or have the inability to drive a wheelchair by any means can control the proposed wheelchair only by their attention and willingness to blink. To acquire the signal of attention and blink, single channel electroencephalogram (EEG wave) was captured by a wearable Neurosky MindWave Mobile. One of the salient features of the proposed scheme is ‘Destination Mapping’ by which the wheelchair develops a virtual map as the user moves around and autonomously reaches desired positions afterward by taking command from a smart interface based on EEG signal. From the experiments that were carried out at different stages of the development, it was exposed that, such a wheelchair is easy to train and calibrate for different users and offers a low cost and smart alternative especially for the elderly people in developing countries.",
"title": ""
},
{
"docid": "cdce757210357d04db2ef22d580bb75c",
"text": "Analysis is performed for a two-arm round spiral structure excited in phase, where the arms near the antenna center are backed by a small round disc. This spiral antenna has a frequency range where the radiation is bidirectional, having patterns symmetric with respect to the antenna plane. Subsequently, the bidirectional radiation is transformed into a unidirectional conical beam by backing the spiral with a cavity, where the cavity height is chosen to be extremely small (0.077 wavelength at the lowest design frequency 3.3 GHz). It is found that, as the distance between the spiral plane and the small disc inside the cavity is decreased, variations in the axial ratio and VSWR become smaller, as desired. Other antenna characteristics over a wide frequency range of 3.3 GHz to 9.6 GHz, including the gain and radiation efficiency, are also discussed.",
"title": ""
},
{
"docid": "54ef3b0ba6c2ac7830c78b828e58299f",
"text": "Deep Learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis using multiple speakers. Some hidden layers are shared by all the speakers, while there is a specific output layer for each speaker. Objective and perceptual experiments prove that this scheme produces much better results in comparison with single speaker model. Moreover, we also tackle the problem of speaker adaptation by adding a new output branch to the model and successfully training it without the need of modifying the base optimized model. This fine tuning method achieves better results than training the new speaker from scratch with its own model.",
"title": ""
},
{
"docid": "1a7bdb641bc9b52a1e48e2d6842bf5aa",
"text": "Sales of a brand are determined by measures such as how many customers buy the brand, how often, and how much they also buy other brands. Scanner panel operators routinely report these ‘‘brand performance measures’’ (BPMs) to their clients. In this position paper, we consider how to understand, interpret, and use these measures. The measures are shown to follow well-established patterns. One is that big and small brands differ greatly in how many buyers they have, but usually far less in how loyal these buyers are. The Dirichlet model predicts these patterns. It also provides a broader framework for thinking about all competitive repeat-purchase markets—from soup to gasoline, prescription drugs to aviation fuel, where there are large and small brands, and light and heavy buyers, in contexts as diverse as the United States, United Kingdom, Japan, Germany, and Australasia. Numerous practical uses of the framework are illustrated: auditing the performance of established brands, predicting and evaluating the performance of new brands, checking the nature of unfamiliar markets, of partitioned markets, and of dynamic market situations more generally (where the Dirichlet provides theoretical benchmarks for price promotions, advertising, etc.). In addition, many implications for our understanding of consumers, brands, and the marketing mix logically follow from the Dirichlet framework. In repeat-purchase markets, there is often a lack of segmentation between brands and the typical consumer exhibits polygamous buying behavior (though there might be strong segmentation at the category level). An understanding of these applications and implications leads to consumer insights, imposes constraints on marketing action, and provides norms for evaluating brands and for assessing marketing initiatives. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "0f44ab1a2d93ce015778e9a41063ce7b",
"text": "Bullying is a serious problem in schools, and school authorities need effective solutions to resolve this problem. There is growing interest in the wholeschool approach to bullying. Whole-school programs have multiple components that operate simultaneously at different levels in the school community. This article synthesizes the existing evaluation research on whole-school programs to determine the overall effectiveness of this approach. The majority of programs evaluated to date have yielded nonsignificant outcomes on measures of self-reported victimization and bullying, and only a small number have yielded positive outcomes. On the whole, programs in which implementation was systematically monitored tended to be more effective than programs without any monitoring. show little empathy for their victims (Roberts & Morotti, 2000). Bullying may be a means of increasing one’s social status and access to valued resources, such as the attention of opposite-sex peers (Pellegrini, 2001). Victims tend to be socially isolated, lack social skills, and have more anxiety and lower self-esteem than students in general (Olweus, 1997). They also tend to have a higher than normal risk for depression and suicide (e.g., Sourander, Helstelae, Helenius, & Piha, 2000). A subgroup of victims reacts aggressively to abuse and has a distinct pattern of psychosocial maladjustment encompassing both the antisocial behavior of bullies and the social and emotional difficulties of victims (Glover, Gough, Johnson, & Cartwright, 2000). Bullying is a relatively stable and long-term problem for those involved, particularly children fitting the profile Bullying is a particularly vicious kind of aggressive behavior distinguished by repeated acts against weaker victims who cannot easily defend themselves (Farrington, 1993; Smith & Brain, 2000). Its consequences are severe, especially for those victimized over long periods of time. Bullying is a complex psychosocial problem influenced by a myriad of variables. The repetition and imbalance of power involved may be due to physical strength, numbers, or psychological factors. Both bullies and victims evidence poorer psychological adjustment than individuals not involved in bullying (Kumpulainen, Raesaenen, & Henttonen, 1999; Nansel et al., 2001). Children who bully tend to be involved in alcohol consumption and smoking, have poorer academic records than noninvolved students, display a strong need for dominance, and",
"title": ""
}
] |
scidocsrr
|
0c93f0d9bd2bf810b0621fe1282a264e
|
Analysis and reduction of common mode EMI noise for resonant converters
|
[
{
"docid": "887665ab7f043987b3373628d9cf6021",
"text": "In isolated converter, transformer is a main path of common mode current. Methods of how to reduce the noise through transformer have been widely studied. One effective technique is using shield between primary and secondary winding. In this paper, EMI noise transferring path and EMI model for typical isolated converters are analyzed. And the survey about different methods of shielding is discussed. Their pros and cons are analyzed. Then the balance concept is introduced and our proposed double shielding using balance concept for wire winding transformer is raised. It can control the parasitic capacitance accurately and is easy to manufacturing. Next, a newly proposed single layer shielding for PCB winding transformer is discussed. The experiment results are provided to verify the methods.",
"title": ""
},
{
"docid": "9dbff74b02153ee33f23d00884d909f7",
"text": "The trend in isolated DC/DC converters is increasing output power demands and higher operating frequencies. Improved topologies and semiconductors can allow for lower loss at higher frequencies. A major barrier to further improvement is the transformer design. With high current levels and high frequency effects the transformers can become the major loss component in the circuit. High values of transformer leakage inductance can also greatly degrade the performance of the converter. Matrix transformers offer the ability to reduce winding loss and leakage inductance. This paper will study the impact of increased switching frequencies on transformer size and explore the use of matrix transformers in high current high frequency isolated applications. This paper will also propose an improved integrated matrix transformer design that can decrease core loss and further improve the performance of matrix transformers.",
"title": ""
},
{
"docid": "9746a126b884fe5e542ebb31f814c281",
"text": "LLC resonant DC/DC converters are becoming popular in computing applications, such as telecom, server systems. For these applications, it is required to meet the EMI standard. In this paper, novel EMI noise transferring path and EMI model for LLC resonant DC/DC converters are proposed. DM and CM noise of LLC resonant converter are analyzed. Several EMI noise reduction approaches are proposed. Shield layers are applied to reduce CM noise. By properly choosing the ground point of shield layer, significant noise reduction can be obtained. With extra EMI balance capacitor, CM noise can be reduced further. Two channel interleaving LLC resonant converters are proposed to cancel the CM current. Conceptually, when two channels operate with 180 degree phase shift, CM current can be canceled. Therefore, the significant EMI noise reduction can be achieved.",
"title": ""
}
] |
[
{
"docid": "44895e24ca91db113a8c01d84bd5b83c",
"text": "In living organisms, nitrogen arise primarily as ammonia (NH3) and ammonium (NH4(+)), which is a main component of the nucleic acid pool and proteins. Although nitrogen is essential for growth and maintenance in animals, but when the nitrogenous compounds exceeds the normal range which can quickly lead to toxicity and death. Urea cycle is the common pathway for the disposal of excess nitrogen through urea biosynthesis. Hyperammonemia is a consistent finding in many neurological disorders including congenital urea cycle disorders, reye's syndrome and acute liver failure leads to deleterious effects. Hyperammonemia and liver failure results in glutamatergic neurotransmission which contributes to the alteration in the function of the glutamate-nitric oxide-cGMP pathway, modulates the important cerebral process. Even though ammonia is essential for normal functioning of the central nervous system (CNS), in particular high concentrations of ammonia exposure to the brain leads to the alterations of glutamate transport by the transporters. Several glutamate transporters have been recognized in the central nervous system and each has a unique physiological property and distribution. The loss of glutamate transporter activity in brain during acute liver failure and hyperammonemia is allied with increased extracellular brain glutamate concentrations which may be conscientious for the cerebral edema and ultimately cell death.",
"title": ""
},
{
"docid": "042be1fb7939384cf03ecd354f10e35f",
"text": "Text mining is a flexible technology that can be applied to numerous different tasks in biology and medicine. We present a system for extracting disease-gene associations from biomedical abstracts. The system consists of a highly efficient dictionary-based tagger for named entity recognition of human genes and diseases, which we combine with a scoring scheme that takes into account co-occurrences both within and between sentences. We show that this approach is able to extract half of all manually curated associations with a false positive rate of only 0.16%. Nonetheless, text mining should not stand alone, but be combined with other types of evidence. For this reason, we have developed the DISEASES resource, which integrates the results from text mining with manually curated disease-gene associations, cancer mutation data, and genome-wide association studies from existing databases. The DISEASES resource is accessible through a web interface at http://diseases.jensenlab.org/, where the text-mining software and all associations are also freely available for download.",
"title": ""
},
{
"docid": "c0ac3eff02d60a293bb88807d289223d",
"text": "Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data but is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bid irectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly due to attractor dynamics that interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs, and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the generalized recirculation (GeneRec), biologically plausible, error-driven learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can result in good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models that incorporate the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.",
"title": ""
},
{
"docid": "a20b684deeb401855cbdc12cab90610a",
"text": "A zero knowledge interactive proof system allows one person to convince another person of some fact without revealing the information about the proof. In particular, it does not enable the verifier to later convince anyone else that the prover has a proof of the theorem or even merely that the theorem is true (much less that he himself has a proof). This paper reviews the field of zero knowledge proof systems giving a brief overview of zero knowledge proof systems and the state of current research in this field.",
"title": ""
},
{
"docid": "7f6c4518b06e5b20d4fe9e4be4e3fdc1",
"text": "The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.",
"title": ""
},
{
"docid": "6720ae7a531d24018bdd1d3d1c7eb28b",
"text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). 
However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. 
Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. (2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r",
"title": ""
},
{
"docid": "890f459384ea47a8915a60c19a3320e3",
"text": "Product ads are a popular form of search advertizing offered by major search engines, including Yahoo, Google and Bing. Unlike traditional search ads, product ads include structured product specifications, which allow search engine providers to perform better keyword-based ad retrieval. However, the level of completeness of the product specifications varies and strongly influences the performance of ad retrieval. On the other hand, online shops are increasing adopting semantic markup languages such as Microformats, RDFa and Microdata, to annotate their content, making large amounts of product description data publicly available. In this paper, we present an approach for enriching product ads with structured data extracted from thousands of online shops offering Microdata annotations. In our approach we use structured product ads as supervision for training feature extraction models able to extract attribute-value pairs from unstructured product descriptions. We use these features to identify matching products across different online shops and enrich product ads with the extracted data. Our evaluation on three product categories related to electronics show promising results in terms of enriching product ads with useful product data.",
"title": ""
},
{
"docid": "b9a6803c0525c41291a575715a604b0f",
"text": "The Internet-of-Things (IoT) has quickly moved from the realm of hype to reality with estimates of over 25 billion devices deployed by 2020. While IoT has huge potential for societal impact, it comes with a number of key security challenges---IoT devices can become the entry points into critical infrastructures and can be exploited to leak sensitive information. Traditional host-centric security solutions in today's IT ecosystems (e.g., antivirus, software patches) are fundamentally at odds with the realities of IoT (e.g., poor vendor security practices and constrained hardware). We argue that the network will have to play a critical role in securing IoT deployments. However, the scale, diversity, cyberphysical coupling, and cross-device use cases inherent to IoT require us to rethink network security along three key dimensions: (1) abstractions for security policies; (2) mechanisms to learn attack and normal profiles; and (3) dynamic and context-aware enforcement capabilities. Our goal in this paper is to highlight these challenges and sketch a roadmap to avoid this impending security disaster.",
"title": ""
},
{
"docid": "e54d1c92bac316dbaf6cd9d158297b17",
"text": "This paper proposes dynamic software birthmarks which can be extracted during execution of Windows applications. Birthmarks are unique and native characteristics of software. For a pair of softwarep andq, if q has the same birthmarks as p’ , q is suspected as a copy of p. Our security analysis showed that the proposed birthmark has good tolerance against various kinds of program transformation attacks.",
"title": ""
},
{
"docid": "32a4c17a53643042a5c19180bffd7c21",
"text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.",
"title": ""
},
{
"docid": "8251cc4742adfb41fdfe611dc45bb311",
"text": "Recommending news articles is a challenging task due to the continuous changes in the set of available news articles and the contextdependent preferences of users. Traditional recommender approaches are optimized for analyzing static data sets. In news recommendation scenarios, characterized by continuous changes, high volume of messages, and tight time constraints, alternative approaches are needed. In this work we present a highly scalable recommender system optimized for the processing of streams. We evaluate the system in the CLEF NewsREEL challenge. Our system is built on Apache Spark enabling the distributed processing of recommendation requests ensuring the scalability of our approach. The evaluation of the implemented system shows that our approach is suitable for the news recommenation scenario and provides high-quality results while satisfying the tight time constraints.",
"title": ""
},
{
"docid": "5b7ff78bc563c351642e5f316a6d895b",
"text": "OBJECTIVE\nTo determine an albino population's expectations from an outreach albino clinic, understanding of skin cancer risk, and attitudes toward sun protection behavior.\n\n\nDESIGN\nSurvey, June 1, 1997, to September 30, 1997.\n\n\nSETTING\nOutreach albino clinics in Tanzania.\n\n\nPARTICIPANTS\nAll albinos 13 years and older and accompanying adults of younger children attending clinics. Unaccompanied children younger than 13 years and those too sick to answer questions were excluded. Ninety-four questionnaires were completed in 5 villages, with a 100% response rate.\n\n\nINTERVENTIONS\nInterview-based questionnaire with scoring system for pictures depicting poorly sun-protected albinos.\n\n\nRESULTS\nThe most common reasons for attending the clinic were health education and skin examination. Thirteen respondents (14%) believed albinism was inherited; it was more common to believe in superstitious causes of albinism than inheritance. Seventy-three respondents (78%) believed skin cancer was preventable, and 60 (63%) believed skin cancer was related to the sun. Seventy-two subjects (77%) thought sunscreen provided protection from the sun; 9 (10%) also applied it at night. Reasons for not wearing sun-protective clothing included fashion, culture, and heat. The hats provided were thought to have too soft a brim, to shrink, and to be ridiculed. Suggestions for additional clinic services centered on education and employment. Albinos who had read the educational booklet had no better understanding of sun avoidance than those who had not (P =.49).\n\n\nCONCLUSIONS\nThere was a reasonable understanding of risks of skin cancer and sun-avoidance methods. Clinical advice was often not followed for cultural reasons. The hats provided were unsuitable, and there was some confusion about the use of sunscreen. A lack of understanding of the cause of albinism led to many superstitions.",
"title": ""
},
{
"docid": "b46a9871dc64327f1ab79fa22de084ce",
"text": "Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, where the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how to leverage the Domain Name System (DNS) for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zone and DNSSEC records. We propose a DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug and play component that could be added to the existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG could effectively block DNS reconnaissance attacks.",
"title": ""
},
{
"docid": "d3c5a15b14ab5f4a44223e7e571e412e",
"text": "− Instead of minimizing the observed training error, Support Vector Regression (SVR) attempts to minimize the generalization error bound so as to achieve generalized performance. The idea of SVR is based on the computation of a linear regression function in a high dimensional feature space where the input data are mapped via a nonlinear function. SVR has been applied in various fields – time series and financial (noisy and risky) prediction, approximation of complex engineering analyses, convex quadratic programming and choices of loss functions, etc. In this paper, an attempt has been made to review the existing theory, methods, recent developments and scopes of SVR.",
"title": ""
},
{
"docid": "bb685e028e4f1005b7fe9da01f279784",
"text": "Although there are few efficient algorithms in the literature for scientific workflow tasks allocation and scheduling for heterogeneous resources such as those proposed in grid computing context, they usually require a bounded number of computer resources that cannot be applied in Cloud computing environment. Indeed, unlike grid, elastic computing, such asAmazon's EC2, allows users to allocate and release compute resources on-demand and pay only for what they use. Therefore, it is reasonable to assume that the number of resources is infinite. This feature of Clouds has been called âillusion of infiniteresourcesâ. However, despite the proven benefits of using Cloud to run scientific workflows, users lack guidance for choosing between multiple offering while taking into account several objectives which are often conflicting. On the other side, the workflow tasks allocation and scheduling have been shown to be NP-complete problems. Thus, it is convenient to use heuristic rather than deterministic algorithm. The objective of this paper is to design an allocation strategy for Cloud computing platform. More precisely, we propose three complementary bi-criteria approaches for scheduling workflows on distributed Cloud resources, taking into account the overall execution time and the cost incurred by using a set of resources.",
"title": ""
},
{
"docid": "746fde6f710c39857b86c625098ad8cb",
"text": "Estimation of the orientation field is one of the key challenges during biometric feature extraction from a fingerprint sample. Many important processing steps rely on an accurate and reliable estimation. This is especially challenging for samples of low quality, for which in turn accurate preprocessing is essential. Regressional Convolutional Neural Networks have shown their superiority for bad quality samples in the independent benchmark framework FVC-ongoing. This work proposes to incorporate Deep Expectation. Options for further improvements are evaluated in this challenging environment of low quality images and small amount of training data. The findings from the results improve the new algorithm called DEX-OF. Incorporating Deep Expectation, improved regularization, and slight model changes DEX-OF achieves an RMSE of 7.52° on the bad quality dataset and 4.89° at the good quality dataset at FVC-ongoing. These are the best reported error rates so far.",
"title": ""
},
{
"docid": "96c4b307391d049924cb6f06191d3bae",
"text": "Theory and research on media violence provides evidence that aggressive youth seek out media violence and that media violence prospectively predicts aggression in youth.The authors argue that both relationships,when modeled over time, should be mutually reinforcing, in what they call a downward spiral model. This study uses multilevel modeling to examine individual growth curves in aggressiveness and violent media use. The measure of use of media violence included viewing action films, playing violent computer and video games, and visiting violence-oriented Internet sites by students from 20 middle schools in 10 different regions in the United States. The findings appear largely consistent with the proposed model. In particular, concurrent effects of aggressiveness on violent-media use and concurrent and lagged effects of violent media use on aggressiveness were found. The implications of this model for theorizing about media effects on youth, and for bridging active audience with media effects perspectives, are discussed.",
"title": ""
},
{
"docid": "792694fbea0e2e49a454ffd77620da47",
"text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical",
"title": ""
},
{
"docid": "64bbb86981bf3cc575a02696f64109f6",
"text": "We use computational techniques to extract a large number of different features from the narrative speech of individuals with primary progressive aphasia (PPA). We examine several different types of features, including part-of-speech, complexity, context-free grammar, fluency, psycholinguistic, vocabulary richness, and acoustic, and discuss the circumstances under which they can be extracted. We consider the task of training a machine learning classifier to determine whether a participant is a control, or has the fluent or nonfluent variant of PPA. We first evaluate the individual feature sets on their classification accuracy, then perform an ablation study to determine the optimal combination of feature sets. Finally, we rank the features in four practical scenarios: given audio data only, given unsegmented transcripts only, given segmented transcripts only, and given both audio and segmented transcripts. We find that psycholinguistic features are highly discriminative in most cases, and that acoustic, context-free grammar, and part-of-speech features can also be important in some circumstances.",
"title": ""
},
{
"docid": "35f2e6242ca33c7bb7127cf4111b088a",
"text": "We present a new algorithm for efficiently training n-gram language models on uncertain data, and illustrate its use for semisupervised language model adaptation. We compute the probability that an n-gram occurs k times in the sample of uncertain data, and use the resulting histograms to derive a generalized Katz back-off model. We compare three approaches to semisupervised adaptation of language models for speech recognition of selected YouTube video categories: (1) using just the one-best output from the baseline speech recognizer or (2) using samples from lattices with standard algorithms versus (3) using full lattices with our new algorithm. Unlike the other methods, our new algorithm provides models that yield solid improvements over the baseline on the full test set, and, further, achieves these gains without hurting performance on any of the set of video categories. We show that categories with the most data yielded the largest gains. The algorithm has been released as part of the OpenGrm n-gram library [1].",
"title": ""
}
] |
scidocsrr
|
5ef8c2a4dcd0036c87851f9859c48e4d
|
Future Edge Cloud and Edge Computing for Internet of Things Applications
|
[
{
"docid": "16fbebf500be1bf69027d3a35d85362b",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "bbf5561f88f31794ca95dd991c074b98",
"text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.",
"title": ""
},
{
"docid": "bda2541d2c2a5a5047b29972cb1536f6",
"text": "Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT.",
"title": ""
}
] |
[
{
"docid": "9f3388eb88e230a9283feb83e4c623e1",
"text": "Entity Linking (EL) is an essential task for semantic text understanding and information extraction. Popular methods separately address the Mention Detection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging their mutual dependency. We here propose the first neural end-to-end EL system that jointly discovers and links entities in a text document. The main idea is to consider all possible spans as potential mentions and learn contextual similarity scores over their entity candidates that are useful for both MD and ED decisions. Key components are context-aware mention embeddings, entity embeddings and a probabilistic mention entity map, without demanding other engineered features. Empirically, we show that our end-to-end method significantly outperforms popular systems on the Gerbil platform when enough training data is available. Conversely, if testing datasets follow different annotation conventions compared to the training set (e.g. queries/ tweets vs news documents), our ED model coupled with a traditional NER system offers the best or second best EL accuracy.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "9b35733a48462f45639625daac540a2f",
"text": "Recommender systems provide strategies that help users search or make decisions within the overwhelming information spaces nowadays. They have played an important role in various areas such as e-commerce and e-learning. In this paper, we propose a hybrid recommendation strategy of content-based and knowledge-based methods that are flexible for any field to apply. By analyzing the past rating records of every user, the system learns the user’s preferences. After acquiring users’ preferences, the semantic search-and-discovery procedure takes place starting from a highly rated item. For every found item, the system evaluates the Interest Intensity indicating to what degree the user might like it. Recommender systems train a personalized estimating module using a genetic algorithm for each user, and the personalized estimating model helps improve the precision of the estimated scores. With the recommendation strategies and personalization strategies, users may have better recommendations that are closer to their preferences. In the latter part of this paper, a realworld case, a movie-recommender system adopting proposed recommendation strategies, is implemented.",
"title": ""
},
{
"docid": "6073601ab6d6e1dbba7a42c346a29436",
"text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.",
"title": ""
},
{
"docid": "bc8c769b625017e2f8522c71dcfe0660",
"text": "Quantitative models have proved valuable in predicting consumer behavior in the offline world. These same techniques can be adapted to predict online actions. The use of diffusion models provides a firm foundation to implement and forecast viral marketing strategies. Choice models can predict purchases at online stores and shopbots. Hierarchical Bayesian models provide a framework to implement versioning and price segmentation strategies. Bayesian updating is a natural tool for profiling users with clickstream data. I illustrate these four modeling techniques and discuss their potential for solving Internet marketing problems.",
"title": ""
},
{
"docid": "67995490350c68f286029d8b401d78d8",
"text": "OBJECTIVE\nModifiable risk factors for dementia were recently identified and compiled in a systematic review. The 'Lifestyle for Brain Health' (LIBRA) score, reflecting someone's potential for dementia prevention, was studied in a large longitudinal population-based sample with respect to predicting cognitive change over an observation period of up to 16 years.\n\n\nMETHODS\nLifestyle for Brain Health was calculated at baseline for 949 participants aged 50-81 years from the Maastricht Ageing Study. The predictive value of LIBRA for incident dementia and cognitive impairment was examined by using Cox proportional hazard models and by testing its relation with cognitive decline.\n\n\nRESULTS\nLifestyle for Brain Health predicted future risk of dementia, as well as risk of cognitive impairment. A one-point increase in LIBRA score related to 19% higher risk for dementia and 9% higher risk for cognitive impairment. LIBRA predicted rate of decline in processing speed, but not memory or executive functioning.\n\n\nCONCLUSIONS\nLifestyle for Brain Health (LIBRA) may help in identifying and monitoring risk status in dementia-prevention programmes, by targeting modifiable, lifestyle-related risk factors. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "b6d71f472848de18eadff0944eab6191",
"text": "Traditional approaches for object discovery assume that there are common characteristics among objects, and then attempt to extract features specific to objects in order to discriminate objects from background. However, the assumption “common features” may not hold, considering different variations between and within objects. Instead, we look at this problem from a different angle: if we can identify background regions, then the rest should belong to foreground. In this paper, we propose to model background to localize possible object regions. Our method is based on the observations: (1) background has limited categories, such as sky, tree, water, ground, etc., and can be easier to recognize, while there are millions of objects in our world with different shapes, colors and textures; (2) background is occluded because of foreground objects. Thus, we can localize objects based on voting from fore/background occlusion boundary. Our contribution lies: (1) we use graph-based image segmentation to yield high quality segments, which effectively leverages both flat segmentation and hierarchical segmentation approaches; (2) we model background to infer and rank object hypotheses. More specifically, we use background appearance and discriminative patches around fore/background boundary to build the background model. The experimental results show that our method can generate good quality object proposals and rank them where objects are covered highly within a small pool of proposed regions. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "837c34e3999714c0aa0dcf901aa278cf",
"text": "A novel high temperature superconducting interdigital bandpass filter is proposed by using coplanar waveguide quarter-wavelength resonators. The CPW resonators are arranged in parallel, and consequently the filter becomes very compact. The filter is a 5-pole Chebyshev BPF with a midband frequency of 5.0GHz and an equal-ripple fractional bandwidth of 3.2%. It is fabricated using a YBCO film deposited on an MgO substrate. The measured filtering characteristics agree well with EM simulations and show a low insertion loss in spite of the small size of the filter.",
"title": ""
},
{
"docid": "da36268b2f7c3a06a420988ac1ad909e",
"text": "PROBLEM\nMost midwives and nurses do not write for publication. Previous authors on this topic have focussed on the processes of writing and getting published. Although definitive English usage style guides exist, they are infrequently consulted by new midwifery authors.\n\n\nPURPOSE\nTo enable new writers to confidently apply the basic skills of scientific writing when preparing a paper for publication.\n\n\nOVERVIEW\nThe basic skills needed for scientific writing are the focus of this paper. The importance of careful word choices is discussed first. Next, the skills of writing sentences are presented. Finally, the skills of writing paragraphs are discussed. Examples of poor and better writing are given in relation to each of these basic elements.",
"title": ""
},
{
"docid": "86519f9b080f3c8037928f67309c6d19",
"text": "We develop an adaptive active contour tracing algorithm for extraction of spinal cord from MRI that is fully automatic, unlike existing approaches that need manually chosen seeds. We can accurately extract the target spinal cord and construct the volume of interest to provide visual guidance for strategic rehabilitation surgery planning.",
"title": ""
},
{
"docid": "aa0bd00ca5240e462e49df3d1bd3487e",
"text": "The choice of the CMOS logic to be used for implementation of a given specification is usually dependent on the optimization and the performance constraints that the finished chip is required to meet. This paper presents a comparative study of CMOS static and dynamic logic. Effect of voltage variation on power and delay of static and dynamic CMOS logic styles studied. The performance of static logic is better than dynamic logic for designing basic logic gates like NAND and NOR. But the dynamic casecode voltage switch logic (CVSL) achieves better performance. 75% lesser power delay product is achieved than that of static CVSL. However, it observed that dynamic logic performance is better for higher fan in and complex logic circuits.",
"title": ""
},
{
"docid": "ba7b51dc253da1a17aaf12becb1abfed",
"text": "This papers aims to design a new approach in order to increase the performance of the decision making in model-based fault diagnosis when signature vectors of various faults are identical or closed. The proposed approach consists on taking into account the knowledge issued from the reliability analysis and the model-based fault diagnosis. The decision making, formalised as a bayesian network, is established with a priori knowledge on the dynamic component degradation through Markov chains. The effectiveness and performances of the technique are illustrated on a heating water process corrupted by faults. Copyright © 2006 IFAC",
"title": ""
},
{
"docid": "575dd426037b7f6d8cbc61d17a7e7a70",
"text": "Dual active bridge (DAB) converters offer an unmatched capability to transfer energy in either direction between two dc sources, while also providing galvanic isolation and high conversion efficiency. However, to operate at higher efficiencies, the bridges must operate with zero voltage switching (ZVS) over as wide an operating range as possible. The conventional approach to determine ZVS operation uses time domain analysis with ideal ac coupling inductances, which only approximately identifies the ZVS boundaries. This paper proposes a new approach using harmonic decomposition of the bridge switching patterns, which gives an explicit theoretical solution under all operating conditions, while also accommodating more complex ac coupling structures, practical impedance nonidealities, and the switching impact of dead-time and device capacitance. The methodology is confirmed by matching analytical predictions with experimental results for selected DAB systems.",
"title": ""
},
{
"docid": "e6543cc19d8ff6015593212fa8e6d86a",
"text": "Many researchers are now dedicating their efforts to studying interactive modalities such as facial expressions, natural language, and gestures. This phenomenon makes communication between robots and individuals become more natural. However, many robots currently in use are appearance constrained and not able to perform facial expressions and gestures. In addition, although humanoid-oriented techniques are promising, they are time and cost consuming, which leads to many technical difficulties in most research studies. To increase interactive efficiency and decrease costs, we alternatively focus on three interaction modalities and their combinations, namely color, sound, and vibration. We conduct a structured study to evaluate the effects of the three modalities on a human's emotional perception towards our simple-shaped robot \"Maru.\" Our findings offer insights into human-robot affective interactions, which can be particularly useful for appearance-constrained social robots. The contribution of this work is not so much the explicit parameter settings but rather deepening the understanding of how to express emotions through the simple modalities of color, sound, and vibration while providing a set of recommended expressions that HRI researchers and practitioners could readily employ.",
"title": ""
},
{
"docid": "cda6d8c94602170e2534fc29973ecff8",
"text": "In 1912, Max Wertheimer published his paper on phi motion, widely recognized as the start of Gestalt psychology. Because of its continued relevance in modern psychology, this centennial anniversary is an excellent opportunity to take stock of what Gestalt psychology has offered and how it has changed since its inception. We first introduce the key findings and ideas in the Berlin school of Gestalt psychology, and then briefly sketch its development, rise, and fall. Next, we discuss its empirical and conceptual problems, and indicate how they are addressed in contemporary research on perceptual grouping and figure-ground organization. In particular, we review the principles of grouping, both classical (e.g., proximity, similarity, common fate, good continuation, closure, symmetry, parallelism) and new (e.g., synchrony, common region, element and uniform connectedness), and their role in contour integration and completion. We then review classic and new image-based principles of figure-ground organization, how it is influenced by past experience and attention, and how it relates to shape and depth perception. After an integrated review of the neural mechanisms involved in contour grouping, border ownership, and figure-ground perception, we conclude by evaluating what modern vision science has offered compared to traditional Gestalt psychology, whether we can speak of a Gestalt revival, and where the remaining limitations and challenges lie. A better integration of this research tradition with the rest of vision science requires further progress regarding the conceptual and theoretical foundations of the Gestalt approach, which is the focus of a second review article.",
"title": ""
},
{
"docid": "dcbec6eea7b3157285298f303eb78840",
"text": "Osteochondral tissue engineering has shown an increasing development to provide suitable strategies for the regeneration of damaged cartilage and underlying subchondral bone tissue. For reasons of the limitation in the capacity of articular cartilage to self-repair, it is essential to develop approaches based on suitable scaffolds made of appropriate engineered biomaterials. The combination of biodegradable polymers and bioactive ceramics in a variety of composite structures is promising in this area, whereby the fabrication methods, associated cells and signalling factors determine the success of the strategies. The objective of this review is to present and discuss approaches being proposed in osteochondral tissue engineering, which are focused on the application of various materials forming bilayered composite scaffolds, including polymers and ceramics, discussing the variety of scaffold designs and fabrication methods being developed. Additionally, cell sources and biological protein incorporation methods are discussed, addressing their interaction with scaffolds and highlighting the potential for creating a new generation of bilayered composite scaffolds that can mimic the native interfacial tissue properties, and are able to adapt to the biological environment.",
"title": ""
},
{
"docid": "9c452434ad1c25d0fbe71138b6c39c4b",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "c55de58c07352373570ec7d46c5df03d",
"text": "Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.",
"title": ""
},
{
"docid": "4227d667cac37fe0e2ecf5a8c199d885",
"text": "Plant diseases problem can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant leaf diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and computation of plant leaf diseases. The developed processing scheme consists of five main steps, first a color transformation structure for the input RGB image is created, then the noise i.e. unnecessary part is removed using specific threshold value, then the image is segmented with connected component labeling and the useful segments are extracted, finally the ANN classification is computed by giving different features i.e. size, color, proximity and average centroid distance. Experimental results on a database of 4 different diseases confirms the robustness of the proposed approach. KeywordsANN, Color, Plant Leaf Diseases, RGB ————————————————————",
"title": ""
},
{
"docid": "5cbb0957d8f5b6ab48975ced0758cb7b",
"text": "The last decade has seen a rapid increase in the number of tools to acquire volume electron microscopy (EM) data. Several new scanning EM (SEM) imaging methods have emerged, and classical transmission EM (TEM) methods are being scaled up and automated. Here we summarize the new methods for acquiring large EM volumes, and discuss the tradeoffs in terms of resolution, acquisition speed, and reliability. We then assess each method's applicability to the problem of reconstructing anatomical connectivity between neurons, considering both the current capabilities and future prospects of the method. Finally, we argue that neuronal 'wiring diagrams' are likely necessary, but not sufficient, to understand the operation of most neuronal circuits: volume EM imaging will likely find its best application in combination with other methods in neuroscience, such as molecular biology, optogenetics, and physiology.",
"title": ""
}
] |
scidocsrr
|
e26cf27137f165168f2bf2b3487b16c1
|
Assessment of Smartphone Addiction in Indian Adolescents: A Mixed Method Study by Systematic-review and Meta-analysis Approach
|
[
{
"docid": "dc8d6a99812b5a5953b4e319e519447f",
"text": "By 2025, when most of today's psychology undergraduates will be in their mid-30s, more than 5 billion people on our planet will be using ultra-broadband, sensor-rich smartphones far beyond the abilities of today's iPhones, Androids, and Blackberries. Although smartphones were not designed for psychological research, they can collect vast amounts of ecologically valid data, easily and quickly, from large global samples. If participants download the right \"psych apps,\" smartphones can record where they are, what they are doing, and what they can see and hear and can run interactive surveys, tests, and experiments through touch screens and wireless connections to nearby screens, headsets, biosensors, and other peripherals. This article reviews previous behavioral research using mobile electronic devices, outlines what smartphones can do now and will be able to do in the near future, explains how a smartphone study could work practically given current technology (e.g., in studying ovulatory cycle effects on women's sexuality), discusses some limitations and challenges of smartphone research, and compares smartphones to other research methods. Smartphone research will require new skills in app development and data analysis and will raise tough new ethical issues, but smartphones could transform psychology even more profoundly than PCs and brain imaging did.",
"title": ""
}
] |
[
{
"docid": "18ba6afa8aa1a1e603d87085f9de9332",
"text": "A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, (1) before it starts self-improvement, (2) during its takeoff, when it uses various instruments to escape its initial confinement, or (3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned, or feature-flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified around several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "82dd67625fd8f2af3bf825fdef410836",
"text": "Public health thrives on high-quality evidence, yet acquiring meaningful data on a population remains a central challenge of public health research and practice. Social monitoring, the analysis of social media and other user-generated web data, has brought advances in the way we leverage population data to understand health. Social media offers advantages over traditional data sources, including real-time data availability, ease of access, and reduced cost. Social media allows us to ask, and answer, questions we never thought possible. This book presents an overview of the progress on uses of social monitoring to study public health over the past decade. We explain available data sources, common methods, and survey research on social monitoring in a wide range of public health areas. Our examples come from topics such as disease surveillance, behavioral medicine, and mental health, among others. We explore the limitations and concerns of these methods. Our survey of this exciting new field of data-driven research lays out future research directions.",
"title": ""
},
{
"docid": "cd549297cb4644aaf24c28b5bbdadb24",
"text": "This study identifies the difference in the perceptions of academic stress and reaction to stressors based on gender among first year university students in Nigeria. Student Academic Stress Scale (SASS) was the instrument used to collect data from 2,520 first year university students chosen through systematic random sampling from Universities in the six geo-political zones of Nigeria. To determine gender differences among the respondents, independent samples t-test was used via SPSS version 15.0. The results of research showed that male and female respondents differed significantly in their perceptions of frustrations, financials, conflicts and selfexpectations stressors but did not significantly differ in their perceptions of pressures and changesrelated stressors. Generally, no significant difference was found between male and female respondents in their perceptions of academic stressors, however using the mean scores as basis, female respondents scored higher compared to male respondents. Regarding reaction to stressors, male and female respondents differ significantly in their perceptions of emotional and cognitive reactions but did not differ significantly in their perceptions of physiological and behavioural reaction to stressors.",
"title": ""
},
{
"docid": "a21b00c391d2eb795b90762a99819c37",
"text": "Phishing is a cyber attack which involves a fake website mimicking the some real legitimate website. The website makes the user believe the website being authentic and thus online user provides their sensitive information like password, PIN, Social Security Number, and Credit Card Information etc. Due to involvement of such high sensitivity information, these websites are a huge threat to online users and detection and blocking of such website become crucial. In this thesis, we propose a new phishing detection method to protect the internet users from such attacks. In particular, given a website, our proposed method will be able to detect between a phishing website and a legitimate website just by the screenshot of the logo image of it. Due to the usage of screenshot for extracting the logo, any hidden logo will not be able to spoof the algorithm into considering the website as phishing as happened in existing methods. In first study focus was on dataset gathering and then the logo image is extracted. This logo image is uploaded to Google image search engine using automated script which returns the URLs associated with that image. Since the relationship between logo and domain name is exclusive it is reasonable to treat the logo image as identity of original URL. Hence the phishing website will not have the same relation to the logo image as such and will not get returned as URL by Google when search for that logo image. Further, Alexa page rank is also used to strengthen the detection accuracy.",
"title": ""
},
{
"docid": "313c68843b2521d553772dd024eec202",
"text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.",
"title": ""
},
{
"docid": "f72267cde1287bc3d0a235043c4dc5f5",
"text": "End-to-end congestion control mechanisms have been critical to the robustness and stability of the Internet. Most of today’s Internet traffic is TCP, and we expect this to remain so in the future. Thus, having “TCP-friendly” behavior is crucial for new applications. However, the emergence of non-congestion-controlled realtime applications threatens unfairness to competing TCP traffic and possible congestion collapse. We present an end-to-end TCP-friendly Rate Adaptation Protocol (RAP), which employs an additive-increase, multiplicativedecrease (AIMD) algorithm. It is well suited for unicast playback of realtime streams and other semi-reliable rate-based applications. Its primary goal is to be fair and TCP-friendly while separating network congestion control from application-level reliability. We evaluate RAP through extensive simulation, and conclude that bandwidth is usually evenly shared between TCP and RAP traffic. Unfairness to TCP traffic is directly determined by how TCP diverges from the AIMD algorithm. Basic RAP behaves in a TCPfriendly fashion in a wide range of likely conditions, but we also devised a fine-grain rate adaptation mechanism to extend this range further. Finally, we show that deploying RED queue management can result in an ideal fairness between TCP and RAP traffic.",
"title": ""
},
{
"docid": "f5e44676e9ce8a06bcdb383852fb117f",
"text": "We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the Imagenet object classification challenge with low-precision network while reducing the compute requirement by ∼3× compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, DLAC, that can achieve up to 1 TFLOP/mm2 equivalent for single-precision floating-point operations (∼2 TFLOP/mm2 for half-precision), which is ∼5× better than Linear Algebra Core [16] and ∼4× better than previous deep learning accelerator proposal [8].",
"title": ""
},
{
"docid": "e4236031c7d165a48a37171c47de1c38",
"text": "We present a discrete event simulation model reproducing the adoption of Radio Frequency Identification (RFID) technology for the optimal management of common logistics processes of a Fast Moving Consumer Goods (FMCG) warehouse. In this study, simulation is exploited as a powerful tool to replicate both the reengineered RFID logistics processes and the flows of Electronic Product Code (EPC) data generated by such processes. Moreover, a complex tool has been developed to analyze data resulting from the simulation runs, thus addressing the issue of how the flows of EPC data generated by RFID technology can be exploited to provide value-added information for optimally managing the logistics processes. Specifically, an EPCIS compliant Data Warehouse has been designed to act as EPCIS Repository and store EPC data resulting from simulation. Starting from EPC data, properly designed tools, referred to as Business Intelligence Modules, provide value-added information for processes optimization. Due to the newness of RFID adoption in the logistics context and to the lack of real case examples that can be examined, we believe that both the model and the data management system developed can be very useful to understand the practical implications of the technology and related information flow, as well as to show how to leverage EPC data for process management. Results of the study can provide a proof-of-concept to substantiate the adoption of RFID technology in the FMCG industry.",
"title": ""
},
{
"docid": "d70e908868eb34b82862915db18499a1",
"text": "Query reverse engineering seeks to re-generate the SQL query that produced a given query output table from a given database. In this paper, we solve this problem for OLAP queries with group-by and aggregation. We develop a novel three-phase algorithm named REGAL 1 for this problem. First, based on a lattice graph structure, we identify a set of group-by candidates for the desired query. Second, we apply a set of aggregation constraints that are derived from the properties of aggregate operators at both the table-level and the group-level to discover candidate combinations of group-by columns and aggregations that are consistent with the given query output table. Finally, we find a multi-dimensional filter, i.e., a conjunction of selection predicates over the base table attributes, that is needed to generate the exact query output table. We conduct an extensive experimental study over the TPC-H dataset to demonstrate the effectiveness and efficiency of our proposal.",
"title": ""
},
{
"docid": "326493520ccb5c8db07362f412f57e62",
"text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.",
"title": ""
},
{
"docid": "3e586b15daab65a66066ae26ee34a3d5",
"text": "We first look at some nonstandard fuzzy sets, intuitionistic, and interval-valued fuzzy sets. We note both these allow a degree of commitment of less then one in assigning membership. We look at the formulation of the negation for these sets and show its expression in terms of the standard complement with respect to the degree of commitment. We then consider the complement operation. We describe its properties and look at alternative definitions of complement operations. We then focus on the Pythagorean complement. Using this complement, we introduce a class of nonstandard Pythagorean fuzzy subsets whose membership grades are pairs, (a, b) satisfying the requirement a2 + b2 ≤ 1. We introduce a variety of aggregation operations for these Pythagorean fuzzy subsets. We then look at multicriteria decision making in the case where the criteria satisfaction are expressed using Pythagorean membership grades. The issue of having to choose a best alternative in multicriteria decision making leads us to consider the problem of comparing Pythagorean membership grades.",
"title": ""
},
{
"docid": "428d522f59dbef1c52421abcaaa7a0c2",
"text": "We devise new coding methods to minimize Phase Change Memory write energy. Our method minimizes the energy required for memory rewrites by utilizing the differences between PCM read, set, and reset energies. We develop an integer linear programming method and employ dynamic programming to produce codes for uniformly distributed data. We also introduce data-aware coding schemes to efficiently address the energy minimization problem for stochastic data. Our evaluations show that the proposed methods result in up to 32% and 44% reduction in memory energy consumption for uniform and stochastic data respectively.",
"title": ""
},
{
"docid": "cce465180d48695a6ed150c7024fbbf2",
"text": "The Convolutional Neural Network (CNN) has significantly improved the state-of-the-art in person re-identification (re-ID). In the existing available identification CNN model, the softmax loss function is employed as the supervision signal to train the CNN model. However, the softmax loss only encourages the separability of the learned deep features between different identities. The distinguishing intra-class variations have not been considered during the training process of CNN model. In order to minimize the intra-class variations and then improve the discriminative ability of CNN model, this paper combines a new supervision signal with original softmax loss for person re-ID. Specifically, during the training process, a center of deep features is learned for each pedestrian identity and the deep features are subtracted from the corresponding identity centers, simultaneously. So that, the deep features of the same identity to the center will be pulled efficiently. With the combination of loss functions, the inter-class dispersion and intra-class aggregation can be constrained as much as possible. In this way, a more discriminative CNN model, which has two key learning objectives, can be learned to extract deep features for person re-ID task. We evaluate our method in two identification CNN models (i.e., CaffeNet and ResNet-50). It is encouraging to see that our method has a stable improvement compared with the baseline and yields a competitive performance to the state-of-the-art person re-ID methods on three important person re-ID benchmarks (i.e., Market-1501, CUHK03 and MARS).",
"title": ""
},
{
"docid": "90abe74101541aedb7dfd8f0d44a23d8",
"text": "Three recently developed control methods for voltage regulator modules, namely, V/sup 2/ control, enhanced V/sup 2/ control, and enhanced V/sup 2/ control without output voltage dynamic feedback, are analyzed and compared in this paper. All three methods utilize the output voltage switching ripple for pulse-width modulation (PWM), hence, are collectively referred to as ripple-based control. A general modeling method based on the Krylov-Bogoliubov-Mitropolsky ripple estimation technique is applied to develop averaged models for single-channel as well as multichannel buck converters employing each of the control methods. Unlike existing models that are limited to small-signal operation, the proposed models are valid for large-signal operation and are capable of predicting subharmonic instability without including any sample-and-hold block as used in previous models. The paper also shows that adding parallel, high-quality ceramic capacitors at the output, which are ignored in previous models, can lead to pulse skipping and ripple instability, and a solution based on proper selection of the ceramic capacitors and/or ramp compensation at the PWM is presented. The models are further applied to analyze and compare the performance of the three control methods in terms of ripple stability, effective load current feedforward gain, and output impedance.",
"title": ""
},
{
"docid": "11bee3755d18834ab36c746a983eb21f",
"text": "Neonaticide is the killing of a newborn within the first 24 h of life. Although relatively uncommon, numerous cases of maternal neonaticide have been reported. To date, only two cases of paternal neonaticide have appeared in the literature. The authors review neonaticide and present two new case reports of paternal neonaticide. A psychodynamic explanation of paternal neonaticide is formulated. A new definition for neonaticide, more consistent with biological and psychological determinants, is suggested.",
"title": ""
},
{
"docid": "319285416d58c9b2da618bb6f0c8021c",
"text": "Facial expression analysis is one of the popular fields of research in human computer interaction (HCI). It has several applications in next generation user interfaces, human emotion analysis, behavior and cognitive modeling. In this paper, a facial expression classification algorithm is proposed which uses Haar classifier for face detection purpose, Local Binary Patterns(LBP) histogram of different block sizes of a face image as feature vectors and classifies various facial expressions using Principal Component Analysis (PCA). The algorithm is implemented in real time for expression classification since the computational complexity of the algorithm is small. A customizable approach is proposed for facial expression analysis, since the various expressions and intensity of expressions vary from person to person. The system uses grayscale frontal face images of a person to classify six basic emotions namely happiness, sadness, disgust, fear, surprise and anger.",
"title": ""
},
{
"docid": "ca544972e6fe3c051f72d04608ff36c1",
"text": "The prefrontal cortex (PFC) plays a key role in controlling goal-directed behavior. Although a variety of task-related signals have been observed in the PFC, whether they are differentially encoded by various cell types remains unclear. Here we performed cellular-resolution microendoscopic Ca(2+) imaging from genetically defined cell types in the dorsomedial PFC of mice performing a PFC-dependent sensory discrimination task. We found that inhibitory interneurons of the same subtype were similar to each other, but different subtypes preferentially signaled different task-related events: somatostatin-positive neurons primarily signaled motor action (licking), vasoactive intestinal peptide-positive neurons responded strongly to action outcomes, whereas parvalbumin-positive neurons were less selective, responding to sensory cues, motor action, and trial outcomes. Compared to each interneuron subtype, pyramidal neurons showed much greater functional heterogeneity, and their responses varied across cortical layers. Such cell-type and laminar differences in neuronal functional properties may be crucial for local computation within the PFC microcircuit.",
"title": ""
},
{
"docid": "aae324f4fb48de537bf67bd4ea81e56b",
"text": "Double JPEG compression detection has received considerable attention in blind image forensics. However, only few techniques can provide automatic localization. To address this challenge, this paper proposes a double JPEG compression detection algorithm based on a convolutional neural network (CNN). The CNN is designed to classify histograms of discrete cosine transform (DCT) coefficients, which differ between single-compressed areas (tampered areas) and double-compressed areas (untampered areas). The localization result is obtained according to the classification results. Experimental results show that the proposed algorithm performs well in double JPEG compression detection and forgery localization, especially when the first compression quality factor is higher than the second.",
"title": ""
},
{
"docid": "fee50f8ab87f2b97b83ca4ef92f57410",
"text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.",
"title": ""
}
] |
scidocsrr
|
40ef1f19549f9521acba56c5785a4553
|
Social Network Extraction of Academic Researchers
|
[
{
"docid": "7ff084619d05d21975ff41748a260418",
"text": "In the development of speech recognition algorithms, it is important to know whether any apparent difference in performance of algorithms is statistically significant, yet this issue is almost always overlooked. We present two simple tests for deciding whether the difference in error-rates between two algorithms tested on the same data set is statistically significant. The first (McNemar’s test) requires the errors made by an algorithm to be independent events and is most appropriate for isolated word algorithms. The second (a matched-pairs test) can be used even when errors are not independent events and is more appropriate for connected speech.",
"title": ""
},
{
"docid": "05fcab9232eadffb6e8de94a88c1cec1",
"text": "Unsupervised clustering can be significantly improved using supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to same or different clusters. In recent years, a number of algorithms have been proposed for enhancing clustering quality by employing such supervision. Such methods use the constraints to either modify the objective function, or to learn the distance measure. We propose a probabilistic model for semi-supervised clustering based on Hidden Markov Random Fields (HMRFs) that provides a principled framework for incorporating supervision into prototype-based clustering. The model generalizes a previous approach that combines constraints and Euclidean distance learning, and allows the use of a broad range of clustering distortion measures, including Bregman divergences (e.g., Euclidean distance and I-divergence) and directional similarity measures (e.g., cosine similarity). We present an algorithm that performs partitional semi-supervised clustering of data by minimizing an objective function derived from the posterior energy of the HMRF model. Experimental results on several text data sets demonstrate the advantages of the proposed framework.",
"title": ""
}
] |
[
{
"docid": "7b3dd8bdc75bf99f358ef58b2d56e570",
"text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.",
"title": ""
},
{
"docid": "5c85263f109a57662134607f2d50b095",
"text": "Reducing Employee Turnover in Retail Environments: An Analysis of Servant Leadership Variables by Beatriz Rodriguez MBA, Webster University, 1994 BBA, University of Puerto Rico, 1989 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Business Administration Walden University August 2016 Abstract In a competitive retail environment, retail store managers (RSMs) need to retain retail customer service employees (RCSE) to maximize sales and reduce employee turnover costs. Servant leadership (SL) is a preferred leadership style within customer service organizations; however, there is disagreement regarding the usefulness of SL in the retail industry. The theoretical framework for this correlational study is Greenleaf’s SL theory. Seventy-four of 109 contacted human resources managers (HRMs) from a Fortune 500 United States retailer, with responsibility for evaluating leadership competencies of the RSMs they support, completed Liden’s Servant Leadership Questionnaire. RCSE turnover rates were available from company records. To analyze the correlation between the 3 SL constructs and RCSE turnover, multiple regression analysis with Pearson’s r providing sample correlation coefficients were used. Individually the 3 constructs FIRST (beta = .083, p = .692), EMPOWER (beta = -.076, p = .685), and GROW (beta = -.018, p = .917) were not statistically significant to predict RCSE turnover. The study multiple regression model with F (3,74) = .071, p = .98, R2 = .003 failed to demonstrate a significant correlation between SL constructs and turnover. Considering these findings, the HRMs could hire or train for different leadership skills that may be more applicable to effectively lead a retail sales force. In doing so, the implications for positive social change may result in RCSE retention leading to economic stability and career growth.In a competitive retail environment, retail store managers (RSMs) need to retain retail customer service employees (RCSE) to maximize sales and reduce employee turnover costs. Servant leadership (SL) is a preferred leadership style within customer service organizations; however, there is disagreement regarding the usefulness of SL in the retail industry. The theoretical framework for this correlational study is Greenleaf’s SL theory. Seventy-four of 109 contacted human resources managers (HRMs) from a Fortune 500 United States retailer, with responsibility for evaluating leadership competencies of the RSMs they support, completed Liden’s Servant Leadership Questionnaire. RCSE turnover rates were available from company records. To analyze the correlation between the 3 SL constructs and RCSE turnover, multiple regression analysis with Pearson’s r providing sample correlation coefficients were used. Individually the 3 constructs FIRST (beta = .083, p = .692), EMPOWER (beta = -.076, p = .685), and GROW (beta = -.018, p = .917) were not statistically significant to predict RCSE turnover. The study multiple regression model with F (3,74) = .071, p = .98, R2 = .003 failed to demonstrate a significant correlation between SL constructs and turnover. Considering these findings, the HRMs could hire or train for different leadership skills that may be more applicable to effectively lead a retail sales force. In doing so, the implications for positive social change may result in RCSE retention leading to economic stability and career growth. 
Reducing Employee Turnover in Retail Environments: An Analysis of Servant Leadership Variables by Beatriz Rodriguez MBA, Webster University, 1994 BBA, University of Puerto Rico, 1989 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Business Administration Walden University August 2016 Dedication I dedicate this to my three sons: Javi, J.J., and Javier. You inspire me to reach my goals, hoping that I can set you on the path to reach yours. May this work be an example that with hard work and perseverance, we can accomplish anything. Acknowledgments I would like to extend a heartfelt thanks to my husband, Alfredo, and my friend, Elaine. Their continuous encouragement, help, and support provided the fuel that helped me reach this point in my academic career. I am lucky to have such supportive individuals in my life. Along with my unwavering parents, they believed in my ability to succeed. Lastly, thank you to my Chair, Dr. John Hannon, Co-Chair, Dr. Perry Haan and URR committee member Dr. Lyn Szostek. Their dedication to Walden University and the students is commendable.",
"title": ""
},
{
"docid": "1b5bb38b0a451238b2fc98a39d6766b0",
"text": "OBJECTIVES\nWe quantified concomitant medication polypharmacy, pharmacokinetic and pharmacodynamic interactions, adverse effects and adherence in Australian adults on effective antiretroviral therapy.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nPatients recruited into a nationwide cohort and assessed for prevalence and type of concomitant medication (including polypharmacy, defined as ≥5 concomitant medications), pharmacokinetic or pharmacodynamic interactions, potential concomitant medication adverse effects and concomitant medication adherence. Factors associated with concomitant medication polypharmacy and with imperfect adherence were identified using multivariable logistic regression.\n\n\nRESULTS\nOf 522 participants, 392 (75%) took a concomitant medication (mostly cardiovascular, nonprescription or antidepressant). Overall, 280 participants (54%) had polypharmacy of concomitant medications and/or a drug interaction or contraindication. Polypharmacy was present in 122 (23%) and independently associated with clinical trial participation, renal impairment, major comorbidity, hospital/general practice-based HIV care (versus sexual health clinic) and benzodiazepine use. Seventeen participants (3%) took at least one concomitant medication contraindicated with their antiretroviral therapy, and 237 (45%) had at least one pharmacokinetic/pharmacodynamic interaction. Concomitant medication use was significantly associated with sleep disturbance and myalgia, and polypharmacy of concomitant medications with diarrhoea, fatigue, myalgia and peripheral neuropathy. Sixty participants (12%) reported imperfect concomitant medication adherence, independently associated with requiring financial support, foregoing necessities for financial reasons, good/very good self-reported general health and at least 1 bed day for illness in the previous 12 months.\n\n\nCONCLUSION\nIn a resource-rich setting with universal healthcare access, the majority of this sample took a concomitant medication. Over half had at least one of concomitant medication polypharmacy, pharmacokinetic or pharmacodynamic interaction. Concomitant medication use was associated with several adverse clinical outcomes.",
"title": ""
},
{
"docid": "b4c5ddab0cb3e850273275843d1f264f",
"text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.",
"title": ""
},
{
"docid": "c1ba049befffa94e358555056df15cc2",
"text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.",
"title": ""
},
{
"docid": "d2a7b5cfb1a20ba5f66687326a8e1a3d",
"text": "This paper proposes the efficient Frame Rate Up-conversion (FRUC) that has low computational complexity. The proposed algorithm consists of motion vector (MV) smoothing, selective average based motion compensation (SAMC) and hole interpolation with different weights. The proposed MV smoothing constructs more smooth interpolated frames by correcting inaccurate MVs. The proposed SAMC and hole interpolation effectively deal with overlaps and holes, and thus, they can efficiently reduce the degradation of the interpolated frames by removing blocking artifacts and blurring. Experimental results show that the proposed algorithm improves the average PSNR of the interpolated frames by 4.15dB than the conventional algorithm using bilateral ME, while it shows the average 0.16dB less PSNR performance than the existing algorithm using unilateral ME. However, it can significantly reduce the computational complexity based on absolute difference by 89.3%.",
"title": ""
},
{
"docid": "07cd406cead1a086f61f363269de1aac",
"text": "Tolerating and recovering from link and switch failures are fundamental requirements of most networks, including Software-Defined Networks (SDNs). However, instead of traditional behaviors such as network-wide routing re-convergence, failure recovery in an SDN is determined by the specific software logic running at the controller. While this admits more freedom to respond to a failure event, it ultimately means that each controller application must include its own recovery logic, which makes the code more difficult to write and potentially more error-prone.\n In this paper, we propose a runtime system that automates failure recovery and enables network developers to write simpler, failure-agnostic code. To this end, upon detecting a failure, our approach first spawns a new controller instance that runs in an emulated environment consisting of the network topology excluding the failed elements. Then, it quickly replays inputs observed by the controller before the failure occurred, leading the emulated network into the forwarding state that accounts for the failed elements. Finally, it recovers the network by installing the difference ruleset between emulated and current forwarding states.",
"title": ""
},
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "64efd590a51fc3cab97c9b4b17ba9b40",
"text": "The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amount of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper, we propose a deep neural network based on contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. Another contribution that we make is proposing a technique based on synthetic minority oversampling to generate a large labeled dataset, suitable for deep nets training, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms previous state of the art while leveraging a small and interpretable set of features yet requiring minimal training data.",
"title": ""
},
{
"docid": "dca2900c2b002e3119435bcf983c5aac",
"text": "Substantial evidence suggests that the accumulation of beta-amyloid (Abeta)-derived peptides contributes to the aetiology of Alzheimer's disease (AD) by stimulating formation of free radicals. Thus, the antioxidant alpha-lipoate, which is able to cross the blood-brain barrier, would seem an ideal substance in the treatment of AD. We have investigated the potential effectiveness of alpha-lipoic acid (LA) against cytotoxicity induced by Abeta peptide (31-35) (30 microM) and hydrogen peroxide (H(2)O(2)) (100 microM) with the cellular 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction and fluorescence dye propidium iodide assays in primary neurons of rat cerebral cortex. We found that treatment with LA protected cortical neurons against cytotoxicity induced by Abeta or H(2)O(2). In addition, LA-induced increase in the level of Akt in the neurons was observed by Western blot. The LA-induced neuroprotection and Akt increase were attenuated by pre-treatment with the phosphatidylinositol 3-kinase inhibitor, LY294002 (50 microM). Our data suggest that the neuroprotective effects of the antioxidant LA are partly mediated through activation of the PKB/Akt signaling pathway.",
"title": ""
},
{
"docid": "f49923e0f36a47162ec087c661169459",
"text": "People use imitation to encourage each other during conversation. We have conducted an experiment to investigate how imitation by a robot affect people’s perceptions of their conversation with it. The robot operated in one of three ways: full head gesture mimicking, partial head gesture mimicking (nodding), and non-mimicking (blinking). Participants rated how satisfied they were with the interaction. We hypothesized that participants in the full head gesture condition will rate their interaction the most positively, followed by the partial and non-mimicking conditions. We also performed gesture analysis to see if any differences existed between groups, and did find that men made significantly more gestures than women while interacting with the robot. Finally, we interviewed participants to try to ascertain additional insight into their feelings of rapport with the robot, which revealed a number of valuable insights.",
"title": ""
},
{
"docid": "805d0578891511d3e3dab1309edded8f",
"text": "We propose to learn a curriculum or a syllabus for deep reinforcement learning and supervised learning with deep neural networks by an attachable deep neural network, called ScreenerNet. Specifically, we learn a weight for each sample by jointly training the ScreenerNet and the main network in an end-to-end selfpaced fashion. The ScreenerNet has neither sampling bias nor memory for the past learning history. We show the networks augmented with the ScreenerNet converge faster with better accuracy than the state-of-the-art curricular learning methods in extensive experiments of a Cart-pole task using Deep Q-learning and supervised visual recognition task using three vision datasets such as Pascal VOC2012, CIFAR10, and MNIST. Moreover, the ScreenerNet can be combined with other curriculum learning methods such as Prioritized Experience Replay (PER) for further accuracy improvement.",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "40e7ea2295994e1b822b3e4ab968d9f9",
"text": "This paper presents the use of a new meta-heuristic technique namely gray wolf optimizer (GWO) which is inspired from gray wolves’ leadership and hunting behaviors to solve optimal reactive power dispatch (ORPD) problem. ORPD problem is a well-known nonlinear optimization problem in power system. GWO is utilized to find the best combination of control variables such as generator voltages, tap changing transformers’ ratios as well as the amount of reactive compensation devices so that the loss and voltage deviation minimizations can be achieved. In this paper, two case studies of IEEE 30bus system and IEEE 118-bus system are used to show the effectiveness of GWO technique compared to other techniques available in literature. The results of this research show that GWO is able to achieve less power loss and voltage deviation than those determined by other techniques.",
"title": ""
},
{
"docid": "a45d3d3d5f3d7716371c792d01e4cee8",
"text": "Virtual personal assistants (VPA) (e.g., Amazon Alexa and Google Assistant) today mostly rely on the voice channel to communicate with their users, which however is known to be vulnerable, lacking proper authentication. The rapid growth of VPA skill markets opens a new attack avenue, potentially allowing a remote adversary to publish attack skills to attack a large number of VPA users through popular IoT devices such as Amazon Echo and Google Home. In this paper, we report a study that concludes such remote, large-scale attacks are indeed realistic. More specifically, we implemented two new attacks: voice squatting in which the adversary exploits the way a skill is invoked (e.g., “open capital one”), using a malicious skill with similarly pronounced name (e.g., “capital won”) or paraphrased name (e.g., “capital one please”) to hijack the voice command meant for a different skill, and voice masquerading in which a malicious skill impersonates the VPA service or a legitimate skill to steal the user’s data or eavesdrop on her conversations. These attacks aim at the way VPAs work or the user’s misconceptions about their functionalities, and are found to pose a realistic threat by our experiments (including user studies and real-world deployments) on Amazon Echo and Google Home. The significance of our findings have already been acknowledged by Amazon and Google, and further evidenced by the risky skills discovered on Alexa and Google markets by the new detection systems we built. We further developed techniques for automatic detection of these attacks, which already capture real-world skills likely to pose such threats. ∗All the squatting and impersonation vulnerabilities we discovered are reported to Amazon and Google and received their acknowledgement [7].",
"title": ""
},
{
"docid": "d6abc85e62c28755ed6118257d9c25c3",
"text": "MOTIVATION\nIn a previous paper, we presented a polynomial time dynamic programming algorithm for predicting optimal RNA secondary structure including pseudoknots. However, a formal grammatical representation for RNA secondary structure with pseudoknots was still lacking.\n\n\nRESULTS\nHere we show a one-to-one correspondence between that algorithm and a formal transformational grammar. This grammar class encompasses the context-free grammars and goes beyond to generate pseudoknotted structures. The pseudoknot grammar avoids the use of general context-sensitive rules by introducing a small number of auxiliary symbols used to reorder the strings generated by an otherwise context-free grammar. This formal representation of the residue correlations in RNA structure is important because it means we can build full probabilistic models of RNA secondary structure, including pseudoknots, and use them to optimally parse sequences in polynomial time.",
"title": ""
},
{
"docid": "b53bd3f4a0d8933d9af0f5651a445800",
"text": "Requirements for implemented system can be extracted and reused for a production of a new similar system. Extraction of common and variable features from requirements leverages the benefits of the software product lines engineering (SPLE). Although various approaches have been proposed in feature extractions from natural language (NL) requirements, no related literature review has been published to date for this topic. This paper provides a systematic literature review (SLR) of the state-of-the-art approaches in feature extractions from NL requirements for reuse in SPLE. We have included 13 studies in our synthesis of evidence and the results showed that hybrid natural language processing approaches were found to be in common for overall feature extraction process. A mixture of automated and semi-automated feature clustering approaches from data mining and information retrieval were also used to group common features, with only some approaches coming with support tools. However, most of the support tools proposed in the selected studies were not made available publicly and thus making it hard for practitioners’ adoption. As for the evaluation, this SLR reveals that not all studies employed software metrics as ways to validate experiments and case studies. Finally, the quality assessment conducted confirms that practitioners’ guidelines were absent in the selected studies. © 2015 Elsevier Inc. All rights reserved. c t t t r c S o r ( l w t r t",
"title": ""
},
{
"docid": "b71b9a6990866c89ab7bc65338f61a9d",
"text": "This paper compares advantages and disadvantages of several commonly used current sensing methods such as dedicated sense resistor sensing, MOSFET Rds(on) current sensing, and inductor DC resistance (DCR) current sensing. Among these current sense methods, inductor DCR current sense that shows more advantages over other current sensing methods is chosen for analysis. The time constants mismatch issue between the time constant made by the current sensing RC network and the one formed with output inductor and its DC resistance is addressed in this paper. And an unified small signal modeling of a buck converter using inductor DCR current sensing with matched and mismatched time constants is presented, and the modeling has been verified experimentally.",
"title": ""
},
{
"docid": "02bd18358ac5cb5539a99d4c2babd2ea",
"text": "This tutorial provides an overview of the key research results in the area of entity resolution that are relevant to addressing the new challenges in entity resolution posed by the Web of data, in which real world entities are described by interlinked data rather than documents. Since such descriptions are usually partial, overlapping and sometimes evolving, entity resolution emerges as a central problem both to increase dataset linking but also to search the Web of data for entities and their relations.",
"title": ""
},
{
"docid": "114211e5f3dd526a8b78c1dcba98e9f1",
"text": "We reviewed 75 primary total hip arthroplasty preoperative and postoperative radiographs and recorded limb length discrepancy, change in femoral offset, acetabular position, neck cut, and femoral component positioning. Interobturator line, as a technique to measure preoperative limb length discrepancy, had the least amount of variance when compared with interteardrop and intertuberosity lines (Levene test, P = .0527). The most common error in execution of preoperative templating was excessive limb lengthening (mean, 3.52 mm), primarily due to inferior acetabular cup positioning (Pearson correlation coefficient, P = .036). Incomplete medialization of the acetabular component contributed the most to offset discrepancy. The most common errors in the execution of preoperative templating resulted in excessive limb lengthening and increased offset. Identifying these errors can lead to more accurate templating techniques and improved intraoperative execution.",
"title": ""
}
] |
scidocsrr
|
2690b9b879ce9ab7e2f5a6449c4bdbfb
|
Deep Bottleneck Classifiers in Supervised Dimension Reduction
|
[
{
"docid": "340aa5616ef01e8d8a965f2efb510fe9",
"text": "The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.",
"title": ""
},
{
"docid": "ef04d580d7c1ab165335145c13a1701f",
"text": "Finding good representations of text documents is crucial in information retrieval and classification systems. Today the most popular document representation is based on a vector of word counts in the document. This representation neither captures dependencies between related words, nor handles synonyms or polysemous words. In this paper, we propose an algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network. The model can be trained efficiently on partially labeled corpora, producing very compact representations of documents, while retaining as much class information and joint word statistics as possible. We show that it is advantageous to exploit even a few labeled samples during training.",
"title": ""
}
] |
[
{
"docid": "8f4d228d03efcf161346a2a1c010ee7b",
"text": "This paper develops power control algorithms for energy efficiency (EE) maximization (measured in bit/Joule) in wireless networks. Unlike previous related works, minimum-rate constraints are imposed and the signal-to-interference-plus-noise ratio takes a more general expression, which allows one to encompass some of the most promising 5G candidate technologies. Both network-centric and user-centric EE maximizations are considered. In the network-centric scenario, the maximization of the global EE and the minimum EE of the network is performed. Unlike previous contributions, we develop centralized algorithms that are guaranteed to converge, with affordable computational complexity, to a Karush-Kuhn-Tucker point of the considered non-convex optimization problems. Moreover, closed-form feasibility conditions are derived. In the user-centric scenario, game theory is used to study the equilibria of the network and to derive convergent power control algorithms, which can be implemented in a fully decentralized fashion. Both scenarios above are studied under the assumption that single or multiple resource blocks are employed for data transmission. Numerical results assess the performance of the proposed solutions, analyzing the impact of minimum-rate constraints, and comparing the network-centric and user-centric approaches.",
"title": ""
},
{
"docid": "0f58d491e74620f43df12ba0ec19cda8",
"text": "Latent Dirichlet allocation (LDA) (Blei, Ng, Jordan 2003) is a fully generative statistical language model on the content and topics of a corpus of documents. In this paper we apply a modification of LDA, the novel multi-corpus LDA technique for web spam classification. We create a bag-of-words document for every Web site and run LDA both on the corpus of sites labeled as spam and as non-spam. In this way collections of spam and non-spam topics are created in the training phase. In the test phase we take the union of these collections, and an unseen site is deemed spam if its total spam topic probability is above a threshold. As far as we know, this is the first web retrieval application of LDA. We test this method on the UK2007-WEBSPAM corpus, and reach a relative improvement of 11% in F-measure by a logistic regression based combination with strong link and content baseline classifiers.",
"title": ""
},
{
"docid": "7641d1576250ed1a7d559cc1ad5ee439",
"text": "Considerados como la base evolutiva vertebrada tras su radiación adaptativa en el Devónico, los peces constituyen en la actualidad el grupo más exitoso y diversificado de vertebrados. Como grupo, este conjunto heterogéneo de organismos representa una aparente encrucijada entre la respuesta inmunitaria innata y la aparición de una respuesta inmunitaria adaptativa. La mayoría de órganos inmunitarios de los mamíferos tienen sus homólogos en los peces. Sin embargo, su eventual menor complejidad estructural podría potencialmente limitar la capacidad para generar una respuesta inmunitaria completamente funcional frente a la invasión de patógenos. Se discute aquí la capacidad de los peces para generar respuestas inmunitarias exitosas, teniendo en cuenta la robustez aparente de la respuesta innata de los peces, en comparación con la observada en vertebrados superiores.",
"title": ""
},
{
"docid": "2a3f37db1663c926be1effd5c1061d0a",
"text": "The Intrusion Detection System (IDS) generates huge amounts of alerts that are mostly false positives. The abundance of false positive alerts makes it difficu lt for the security analyst to identify successful attacks and to take remedial actions. Such alerts to have not b een classified in accordance with their degree of t hreats. They further need to be processed to ascertain the most serious alerts and the time of the reaction re sponse. They may take a long time and considerable space to discuss thoroughly. Each IDS generates a huge amount of alerts where most of them are real while t e others are not (i.e., false alert) or are redun dant alerts. The false alerts create a serious problem f or intrusion detection systems. Alerts are defined based on source/destination IP and source/destination ports. However, one cannot know which of those IP/ports b ing a threat to the network. The IDSs’ alerts are not c lassified depending on their degree of the threat. It is difficult for the security analyst to identify atta cks and take remedial action for this threat. So it is necessary to assist in categorizing the degree of the threat, by using data mining techniques. The proposed fram ework for proposal is IDS Alert Reduction and Assessment Based on Data Mining (ARADMF). The proposed framework contains three systems: Traffic data retr ieval and collection mechanism system, reduction ID S alert processes system and threat score process of IDS alert system. The traffic data retrieval and co llection mechanism systems develops a mechanism to save IDS alerts, extract the standard features as intrusion detection message exchange format and save them in DB file (CSV-type). It contains the Intrusion Detection Message Exchange Format (IDMEF) which wor ks as procurement alerts and field reduction is used as data standardization to make the format of lert as standard as possible. As for Feature Extra ction (FE) system, it is designed to extract the features of alert by using a gain information algorithm, wh ich gives a rank for every feature to facilitate the selectio n of the feature with the highest rank. The main fu ction of reduction IDS alert processes system is to remove duplicate IDS alerts and reduces the a mount of false alerts based on a new aggregation algorithm. It con sists of three phases. The first phase removes redu ndant alerts. The second phase reduces false alerts based on threshold time value and the last phase reduces false alerts based on rules with a threshold common vulne rabilities and exposure value. Threat score process of IDS alert system is characterized by using a propos ed adaptive Apriori algorithm, which has been modif ie to work with multi features, i.e., items and automa ted classification of alerts according to their thr eat's scores. The expected result of his proposed will be decreasing the number of false positive alert with rate expected 90% and increasing the level of accuracy c ompared with other approaches. The reasons behind using ARADMF are to reduce the false IDS alerts and to assess them to examine the threat score of IDS alert, that is will be effort to increase the effic iency and accuracy of network security.",
"title": ""
},
{
"docid": "f4c66ff0852b3ad640655e945f5639d9",
"text": "The emergence of a feature-analyzing function from the development rules of simple, multilayered networks is explored. It is shown that even a single developing cell of a layered network exhibits a remarkable set of optimization properties that are closely related to issues in statistics, theoretical physics, adaptive signal processing, the formation of knowledge representation in artificial intelligence, and information theory. The network studied is based on the visual system. These results are used to infer an information-theoretic principle that can be applied to the network as a whole, rather than a single cell. The organizing principle proposed is that the network connections develop in such a way as to maximize the amount of information that is preserved when signals are transformed at each processing stage, subject to certain constraints. The operation of this principle is illustrated for some simple cases.<<ETX>>",
"title": ""
},
{
"docid": "3daa9fc7d434f8a7da84dd92f0665564",
"text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).",
"title": ""
},
{
"docid": "99325b66653b592e1a0a97a32eafc75d",
"text": "Breast Cancer Diagnosis and Prognosis are two medical applications pose a great challenge to the researchers. The use of machine learning and data mining techniques has revolutionized the whole process of breast cancer Diagnosis and Prognosis. Breast Cancer Diagnosis distinguishes benign from malignant breast lumps and Breast Cancer Prognosis predicts when Breast Cancer is likely to recur in patients that have had their cancers excised. Thus, these two problems are mainly in the scope of the classification problems. This study paper summarizes various review and technical articles on breast cancer diagnosis and prognosis. In this paper we present an overview of the current research being carried out using the data mining techniques to enhance the breast cancer diagnosis and prognosis.",
"title": ""
},
{
"docid": "0a9bfd72a12dcddfd27d982c8b27b9d5",
"text": "A novel concept of a series feeding network of a microstrip antenna array has been shown. The proposed feeding network utilizes a four-port slot coupler as a three-way power divider, which is composed of two microstrip lines appropriately coupled through a slot within a common ground plane. The proposed power divider is used for simultaneous power distribution between two consecutive linear subarrays and between two 4 × 1 linear arrays constituting a single 8 × 1 linear subarray, where equal-amplitude and out-of-phase signals are required. Such a solution allows for realization of antenna arrays, in which all linear subarrays designed with the use of the “through-element” series feeding technique are fed at their centers from single transmission lines. The theoretical analysis as well as measurement results of the 8 × 8 antenna array operating within 10.5-GHz frequency range are shown.",
"title": ""
},
{
"docid": "5297929e65e662360d8ff262e877b08a",
"text": "Frontal electroencephalographic (EEG) alpha asymmetry is widely researched in studies of emotion, motivation, and psychopathology, yet it is a metric that has been quantified and analyzed using diverse procedures, and diversity in procedures muddles cross-study interpretation. The aim of this article is to provide an updated tutorial for EEG alpha asymmetry recording, processing, analysis, and interpretation, with an eye towards improving consistency of results across studies. First, a brief background in alpha asymmetry findings is provided. Then, some guidelines for recording, processing, and analyzing alpha asymmetry are presented with an emphasis on the creation of asymmetry scores, referencing choices, and artifact removal. Processing steps are explained in detail, and references to MATLAB-based toolboxes that are helpful for creating and investigating alpha asymmetry are noted. Then, conceptual challenges and interpretative issues are reviewed, including a discussion of alpha asymmetry as a mediator/moderator of emotion and psychopathology. Finally, the effects of two automated component-based artifact correction algorithms-MARA and ADJUST-on frontal alpha asymmetry are evaluated.",
"title": ""
},
{
"docid": "1dac1fc798794517d8db162a9ac80007",
"text": "We describe an automated method for image colorization that learns to colorize from examples. Our method exploits a LEARCH framework to train a quadratic objective function in the chromaticity maps, comparable to a Gaussian random field. The coefficients of the objective function are conditioned on image features, using a random forest. The objective function admits correlations on long spatial scales, and can control spatial error in the colorization of the image. Images are then colorized by minimizing this objective function. We demonstrate that our method strongly outperforms a natural baseline on large-scale experiments with images of real scenes using a demanding loss function. We demonstrate that learning a model that is conditioned on scene produces improved results. We show how to incorporate a desired color histogram into the objective function, and that doing so can lead to further improvements in results.",
"title": ""
},
{
"docid": "1fda14987f2524a9e0f5000814de22d4",
"text": "Machine Learning (ML) is an increasingly popular application in the cloud and data-center, inspiring new algorithmic and systems techniques that leverage unique properties of ML applications to improve their distributed performance by orders of magnitude. However, applications built using these techniques tend to be static, unable to elastically adapt to the changing resource availability that is characteristic of multi-tenant environments. Existing distributed frameworks are either inelastic, or offer programming models which are incompatible with the techniques employed by high-performance ML applications. Motivated by these trends, we present Litz, an elastic framework supporting distributed ML applications. We categorize the wide variety of techniques employed by these applications into three general themes — stateful workers, model scheduling, and relaxed consistency — which are collectively supported by Litz’s programming model. Our implementation of Litz’s execution system transparently enables elasticity and low-overhead execution. We implement several popular ML applications using Litz, and show that they can scale in and out quickly to adapt to changing resource availability, as well as how a scheduler can leverage elasticity for faster job completion and more efficient resource allocation. Lastly, we show that Litz enables elasticity without compromising performance, achieving competitive performance with state-of-the-art non-elastic ML frameworks.",
"title": ""
},
{
"docid": "4d52c04ad923ae51ed9f71fe06a9cf6f",
"text": "In this paper an Inclined Planes Optimization algorithm, is used to optimize the performance of the multilayer perceptron. Indeed, the performance of the neural network depends on its parameters such as the number of neurons in the hidden layer and the connection weights. So far, most research has been done in the field of training the neural network. In this paper, a new algorithm optimization is presented in optimal architecture for data classification. Neural network training is done by backpropagation (BP) algorithm and optimization the architecture of neural network is considered as independent variables in the algorithm. The results in three classification problems have shown that a neural network resulting from these methods have low complexity and high accuracy when compared with results of Particle Swarm Optimization and Gravitational Search Algorithm.",
"title": ""
},
{
"docid": "04c0a4613ab0ec7fd77ac5216a17bd1d",
"text": "Many contemporary biomedical applications such as physiological monitoring, imaging, and sequencing produce large amounts of data that require new data processing and visualization algorithms. Algorithms such as principal component analysis (PCA), singular value decomposition and random projections (RP) have been proposed for dimensionality reduction. In this paper we propose a new random projection version of the fuzzy c-means (FCM) clustering algorithm denoted as RPFCM that has a different ensemble aggregation strategy than the one previously proposed, denoted as ensemble FCM (EFCM). RPFCM is more suitable than EFCM for big data sets (large number of points, n). We evaluate our method and compare it to EFCM on synthetic and real datasets.",
"title": ""
},
{
"docid": "1fde86a3105684900bc51e29c84661ca",
"text": "During the last few years, Wireless Body Area Networks (WBANs) have emerged into many application domains, such as medicine, sport, entertainments, military, and monitoring. This emerging networking technology can be used for e-health monitoring. In this paper, we review the literature and investigate the challenges in the development architecture of WBANs. Then, we classified the challenges of WBANs that need to be addressed for their development. Moreover, we investigate the various diseases and healthcare systems and current state-ofthe-art of applications and mainly focus on the remote monitoring for elderly and chronically diseases patients. Finally, relevant research issues and future development are discussed. Keywords—Wireless body area networks; review; challenges; applications; architecture; radio technologies; telemedicine",
"title": ""
},
{
"docid": "e7e9d6054a61a1f4a3ab7387be28538a",
"text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.",
"title": ""
},
{
"docid": "d0b287d0bd41dedbbfa3357653389e9c",
"text": "Credit scoring model have been developed by banks and researchers to improve the process of assessing credit worthiness during the credit evaluation process. The objective of credit scoring models is to assign credit risk to either a ‘‘good risk’’ group that is likely to repay financial obligation or a ‘‘bad risk’’ group who has high possibility of defaulting on the financial obligation. Construction of credit scoring models requires data mining techniques. Using historical data on payments, demographic characteristics and statistical techniques, credit scoring models can help identify the important demographic characteristics related to credit risk and provide a score for each customer. This paper illustrates using data mining to improve assessment of credit worthiness using credit scoring models. Due to privacy concerns and unavailability of real financial data from banks this study applies the credit scoring techniques using data of payment history of members from a recreational club. The club has been facing a problem of rising number in defaulters in their monthly club subscription payments. The management would like to have a model which they can deploy to identify potential defaulters. The classification performance of credit scorecard model, logistic regression model and decision tree model were compared. The classification error rates for credit scorecard model, logistic regression and decision tree were 27.9%, 28.8% and 28.1%, respectively. Although no model outperforms the other, scorecards are relatively much easier to deploy in practical applications. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e0ae7f96ef81726777e974e140e4bac7",
"text": "Conjoined twins are a rare complication of 9 monozygotic twins and are associated with high perinatal mortality. Pygopagus are one of the rare types of conjoined twins with only a handful of cases reported in the literature. We present the case of one-and-half month-old male pygopagus conjoined twins, who were joined together dorsally in lower lumbar and sacral region and had spina bifida and shared a single thecal sac with combined weight of 6.14 kg. Spinal cord was separated at the level of the conus followed by duraplasty. They had uneventful recovery with normal 15 months follow-up. Separation of conjoined twins is recommended in where this is feasible with the anticipated survival of both or one infant.",
"title": ""
},
{
"docid": "1f6f4025fa450b845cefe5da2b842031",
"text": "The Carnegie Mellon In Silico Vox project seeks to move best-quality speech recognition technology from its current software-only form into a range of efficient all-hardware implementations. The central thesis is that, like graphics chips, the application is simply too performance hungry, and too power sensitive, to stay as a large software application. As a first step in this direction, we describe the design and implementation of a fully functional speech-to-text recognizer on a single Xilinx XUP platform. The design recognizes a 1000 word vocabulary, is speaker-independent, recognizes continuous (connected) speech, and is a \"live mode\" engine, wherein recognition can start as soon as speech input appears. To the best of our knowledge, this is the most complex recognizer architecture ever fully committed to a hardware-only form. The implementation is extraordinarily small, and achieves the same accuracy as state-of-the-art software recognizers, while running at a fraction of the clock speed.",
"title": ""
}
] |
scidocsrr
|
84548fb3c2fd41eb9126b3332243ddb7
|
Exact Recovery of Hard Thresholding Pursuit
|
[
{
"docid": "b0382aa0f8c8171b78dba1c179554450",
"text": "This paper is concerned with the hard thresholding operator which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains: Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the `1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.",
"title": ""
}
] |
[
{
"docid": "ce64c8f2769957a5b93e0947c1987db5",
"text": "Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce (1) feeder failure rankings, (2) cable, joint, terminator, and transformer rankings, (3) feeder Mean Time Between Failure (MTBF) estimates, and (4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy, sources that are historical (static), semi-real-time, or real-time, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City's electrical grid.",
"title": ""
},
{
"docid": "17599a683c92e9ad0d112fb358e0d30a",
"text": "Super-resolution algorithms reconstruct a high resolution image from a set of low resolution images of a scene. Precise alignment of the input images is an essential part of such algorithms. If the low resolution images are undersampled and have aliasing artifacts, the performance of standard registration algorithms decreases. We propose a frequency domain technique to precisely register a set of aliased images, based on their low-frequency, aliasing-free part. A high resolution image is then reconstructed using cubic interpolation. Our algorithm is compared to other algorithms in simulations and practical experiments using real aliased images. Both show very good visual results and prove the attractivity of our approach in the case of aliased input images. A possible application is to digital cameras where a set of rapidly acquired images can be used to recover a higher resolution final image. Index Terms Super-resolution imaging, image registration, aliasing",
"title": ""
},
{
"docid": "edbc09ea4ad9792abd9aa05176c17d42",
"text": "The therapeutic nature of the nurse-patient relationship is grounded in an ethic of caring. Florence Nightingale envisioned nursing as an art and a science...a blending of humanistic, caring presence with evidence-based knowledge and exquisite skill. In this article, the author explores the caring practice of nursing as a framework for understanding moral accountability and integrity in practice. Being morally accountable and responsible for one's judgment and actions is central to the nurse's role as a moral agent. Nurses who practice with moral integrity possess a strong sense of themselves and act in ways consistent with what they understand is the right thing to do. A review of the literature related to caring theory, the concepts of moral accountability and integrity, and the documents that speak of these values and concepts in professional practice (eg, Code of Ethics for Nurses with Interpretive Statements, Nursing's Social Policy Statement) are presented in this article.",
"title": ""
},
{
"docid": "2d2aca831aaf6b66c19b4ac9c2ae8ebb",
"text": "Reinforcement Learning (RL) algorithms can suffer from poor sample efficiency when rewards are delayed and sparse. We introduce a solution that enables agents to learn temporally extended actions at multiple levels of abstraction in a sample efficient and automated fashion. Our approach combines universal value functions and hindsight learning, allowing agents to learn policies belonging to different time scales in parallel. We show that our method significantly accelerates learning in a variety of discrete and continuous tasks. A video illustrating our results is available at https://www.youtube.com/watch?v=jQ5FkDgTBLI.",
"title": ""
},
{
"docid": "ddd275168d4e066df5e5937790a93986",
"text": " The Jyros (JR) and the Advancing The Standard (ATS) valves were compared with the St. Jude Medical (SJM) valve in the mitral position to study the effects of design differences, installed valve orientation to the flow, and closing sounds using particle tracking velocimetry and particle image velocimetry methods utilizing a high-speed video flow visualization technique to map the velocity field. Sound measurements were made to confirm the claims of the manufacturers. Based on the experimental data, the following general conclusions can be made: On the vertical measuring plane which passes through the centers of the aortic and the mitral valves, the SJM valve shows a distinct circulatory flow pattern when the valve is installed in the antianatomical orientation; the SJM valve maintains the flow through the central orifice quite well; the newer curved leaflet JR valve and the ATS valve, which does not fully open during the peak flow phase, generates a higher but divergent flow close to the valve location when the valve was installed anatomically. The antianatomically installed JR valve showed diverse and less distinctive flow patterns and slower velocity on the central measuring plane than the SJM valve did, with noticeably lower valve closing noise. On the velocity field directly below the mitral valve that is normal to the previous measuring plane, the three valves show symmetrical twin circulations due to the divergent nature of the flow generated by the two inclined half discs; the SJM valve with centrally downward circulation is contrasted by the two other valves with peripherally downward circulation. These differences may have an important role in generation of the valve closing sound.",
"title": ""
},
{
"docid": "2fb2c7b5bec56d59c453d6781a80f7bf",
"text": "Automatic generation of natural language description for individual images (a.k.a. image captioning) has attracted extensive research attention. In this paper, we take one step further to investigate the generation of a paragraph to describe a photo stream for the purpose of storytelling. This task is even more challenging than individual image description due to the difficulty in modeling the large visual variance in an ordered photo collection and in preserving the long-term language coherence among multiple sentences. To deal with these challenges, we formulate the task as a sequence-to-sequence learning problem and propose a novel joint learning model by leveraging the semantic coherence in a photo stream. Specifically, to reduce visual variance, we learn a semantic space by jointly embedding each photo with its corresponding contextual sentence, so that the semantically related photos and their correlations are discovered. Then, to preserve language coherence in the paragraph, we learn a novel Bidirectional Attention-based Recurrent Neural Network (BARNN) model, which can attend on the discovered semantic relation to produce a sentence sequence and maintain its consistence with the photo stream. We integrate the two-step learning components into one single optimization formulation and train the network in an end-to-end manner. Experiments on three widely-used datasets (NYC/Disney/SIND) show that the proposed approach outperforms state-of-the-art methods with large margins for both retrieval and paragraph generation tasks. We also show the subjective preference of the machinegenerated stories by the proposed approach over the baselines through a user study with 40 human subjects.",
"title": ""
},
{
"docid": "c116aab75223001bb4d216501b3c3b39",
"text": "OBJECTIVE\nBurnout, a psychological consequence of prolonged work stress, has been shown to coexist with physical and mental disorders. The aim of this study was to investigate whether burnout is related to all-cause mortality among employees.\n\n\nMETHODS\nIn 1996, of 15,466 Finnish forest industry employees, 9705 participated in the 'Still Working' study and 8371 were subsequently identified from the National Population Register. Those who had been treated in a hospital for the most common causes of death prior to the assessment of burnout were excluded on the basis of the Hospital Discharge Register, resulting in a final study population of 7396 people. Burnout was measured using the Maslach Burnout Inventory-General Survey. Dates of death from 1996 to 2006 were extracted from the National Mortality Register. Mortality was predicted with Cox hazard regression models, controlling for baseline sociodemographic factors and register-based health status according to entitled medical reimbursement and prescribed medication for mental health problems, cardiac risk factors, and pain problems.\n\n\nRESULTS\nDuring the 10-year 10-month follow-up, a total of 199 employees had died. The risk of mortality per one-unit increase in burnout was 35% higher (95% CI 1.07-1.71) for total score and 26% higher (0.99-1.60) for exhaustion, 29% higher for cynicism (1.03-1.62), and 22% higher for diminished professional efficacy (0.96-1.55) in participants who had been under 45 at baseline. After adjustments, only the associations regarding burnout and exhaustion were statistically significant. Burnout was not related to mortality among the older employees.\n\n\nCONCLUSION\nBurnout, especially work-related exhaustion, may be a risk for overall survival.",
"title": ""
},
{
"docid": "fc50b185323c45e3d562d24835e99803",
"text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.",
"title": ""
},
{
"docid": "7618fa5b704c892b6b122f3602893d75",
"text": "At the dawn of the second automotive century it is apparent that the competitive realm of the automotive industry is shifting away from traditional classifications based on firms’ production systems or geographical homes. Companies across the regional and volume spectrum have adopted a portfolio of manufacturing concepts derived from both mass and lean production paradigms, and the recent wave of consolidation means that regional comparisons can no longer be made without considering the complexities induced by the diverse ownership structure and plethora of international collaborations. In this chapter we review these dynamics and propose a double helix model illustrating how the basis of competition has shifted from cost-leadership during the heyday of Ford’s original mass production, to variety and choice following Sloan’s portfolio strategy, to diversification through leadership in design, technology or manufacturing excellence, as in the case of Toyota, and to mass customisation, which marks the current competitive frontier. We will explore how the production paradigms that have determined much of the competition in the first automotive century have evolved, what trends shape the industry today, and what it will take to succeed in the automotive industry of the future. 1 This chapter provides a summary of research conducted as part of the ILIPT Integrated Project and the MIT International Motor Vehicle Program (IMVP), and expands on earlier works, including the book The second century: reconnecting customer and value chain through build-toorder (Holweg and Pil 2004) and the paper Beyond mass and lean production: on the dynamics of competition in the automotive industry (Économies et Sociétés: Série K: Économie de l’Enterprise, 2005, 15:245–270).",
"title": ""
},
{
"docid": "a9dd71d336baa0ea78ceb0435be67f67",
"text": "In current credit ratings models, various accounting-based information are usually selected as prediction variables, based on historical information rather than the market’s assessment for future. In the study, we propose credit rating prediction model using market-based information as a predictive variable. In the proposed method, Moody’s KMV (KMV) is employed as a tool to evaluate the market-based information of each corporation. To verify the proposed method, using the hybrid model, which combine random forests (RF) and rough set theory (RST) to extract useful information for credit rating. The results show that market-based information does provide valuable information in credit rating predictions. Moreover, the proposed approach provides better classification results and generates meaningful rules for credit ratings. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6b7d2d82bbfbaa7f55c25b4a304c8d4c",
"text": "Services that are delivered over the Internet—e-services— pose unique problems yet offer unprecedented opportunities. In this paper, we classify e-services along the dimensions of their level of digitization and the nature of their target markets (business-to-business, business-toconsumer, consumer-to-consumer). Using the case of application services, we analyze how they differ from traditional software procurement and development. Next, we extend the concept of modular platforms to this domain and identify how knowledge management can be used to assemble rapidly new application services. We also discuss how such traceabilty-based knowledge management can facilitate e-service evolution and version-based market segmentation.",
"title": ""
},
{
"docid": "7190e8e6f6c061bed8589719b7d59e0d",
"text": "Image-level feature descriptors obtained from convolutional neural networks have shown powerful representation capabilities for image retrieval. In this paper, we present an unsupervised method to aggregate deep convolutional features into compact yet discriminative image vectors by simulating the dynamics of heat diffusion. A distinctive problem in image retrieval is that repetitive or bursty features tend to dominate feature representations, leading to less than ideal matches. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoiding over-representation of bursty features. We additionally provide a practical solution for the proposed aggregation method, and further show the efficiency of our method in experimental evaluation. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks, and show superior performance compared to previous work. Image retrieval has always been an attractive research topic in the field of computer vision. By allowing users to search similar images from a large database of digital images, it provides a natural and flexible interface for image archiving and browsing. Convolutional Neural Networks (CNNs) have shown remarkable accuracy in tasks such as image classification, and object detection. Recent research has also shown positive results of using CNNs on image retrieval (Babenko and Lempitsky 2015; Kalantidis, Mellina, and Osindero 2016; Hoang et al. 2017). However, unlike image classification approaches which often use global feature vectors produced by fully connected layers, these methods extract local features depicting image patches from the outputs of convolutional layers and aggregate these features into compact (a few hundred dimensions) image-level descriptors. Once meaningful and representative image-level descriptors are defined, visually similar images are retrieved by computing similarities between pre-computed database feature representations and query representations. In this paper we devise a method to avoid overrepresenting bursty features. Inspired by an observation of similar phenomena in textual data, Jegou et al. (Jégou, Douze, and Schmid 2009) identified burstiness as the phenomenon by which overly repetitive features within an instance tend to dominate the instance feature representation. In order to alleviate this issue, we propose a feature aggregation approach that emulates the dynamics of heat diffusion. The idea is to model feature maps as a heat system where we weight highly the features leading to low system temperatures. This is because that these features are less connected to other features, and therefore they are more distinctive. The dynamics of the temperature in such system can be estimated using the partial differential equation induced by the heat equation. Heat diffusion, and more specifically anisotropic diffusion, has been used successfully in various image processing and computer vision tasks. Ranging from the classical work of Perona and Malik (Perona and Malik 1990) to further applications in image smoothing, image regularization, image co-segmentation, and optical flow estimation (Zhang, Zheng, and Cai 2010; Tschumperle and Deriche 2005; Kim et al. 2011; Bruhn, Weickert, and Schnörr 2005). However, to our knowledge, it has not been applied to weight features from the outputs of a deep convolutional neural network. 
We show that by combining this classical image processing technique with a deep learning model, we are able to obtain significant gains over previous work. Our contributions can be summarized as follows: • By greedily considering each deep feature as a heat source and enforcing the temperature of the system to be constant within each heat source, we propose a novel efficient feature weighting approach to reduce the undesirable influence of bursty features. • We provide a practical solution to computing weights for our feature weighting method. Additionally, we conduct extensive quantitative evaluations on commonly used image retrieval benchmarks, and demonstrate substantial performance improvement over existing unsupervised methods for feature aggregation.",
"title": ""
},
{
"docid": "f70447a47fb31fc94d6b57ca3ef57ad3",
"text": "BACKGROUND\nOn Aug 14, 2014, the US Food and Drug Administration approved the antiangiogenesis drug bevacizumab for women with advanced cervical cancer on the basis of improved overall survival (OS) after the second interim analysis (in 2012) of 271 deaths in the Gynecologic Oncology Group (GOG) 240 trial. In this study, we report the prespecified final analysis of the primary objectives, OS and adverse events.\n\n\nMETHODS\nIn this randomised, controlled, open-label, phase 3 trial, we recruited patients with metastatic, persistent, or recurrent cervical carcinoma from 81 centres in the USA, Canada, and Spain. Inclusion criteria included a GOG performance status score of 0 or 1; adequate renal, hepatic, and bone marrow function; adequately anticoagulated thromboembolism; a urine protein to creatinine ratio of less than 1; and measurable disease. Patients who had received chemotherapy for recurrence and those with non-healing wounds or active bleeding conditions were ineligible. We randomly allocated patients 1:1:1:1 (blocking used; block size of four) to intravenous chemotherapy of either cisplatin (50 mg/m2 on day 1 or 2) plus paclitaxel (135 mg/m2 or 175 mg/m2 on day 1) or topotecan (0·75 mg/m2 on days 1-3) plus paclitaxel (175 mg/m2 on day 1) with or without intravenous bevacizumab (15 mg/kg on day 1) in 21 day cycles until disease progression, unacceptable toxic effects, voluntary withdrawal by the patient, or complete response. We stratified randomisation by GOG performance status (0 vs 1), previous radiosensitising platinum-based chemotherapy, and disease status (recurrent or persistent vs metastatic). We gave treatment open label. Primary outcomes were OS (analysed in the intention-to-treat population) and adverse events (analysed in all patients who received treatment and submitted adverse event information), assessed at the second interim and final analysis by the masked Data and Safety Monitoring Board. The cutoff for final analysis was 450 patients with 346 deaths. This trial is registered with ClinicalTrials.gov, number NCT00803062.\n\n\nFINDINGS\nBetween April 6, 2009, and Jan 3, 2012, we enrolled 452 patients (225 [50%] in the two chemotherapy-alone groups and 227 [50%] in the two chemotherapy plus bevacizumab groups). By March 7, 2014, 348 deaths had occurred, meeting the prespecified cutoff for final analysis. The chemotherapy plus bevacizumab groups continued to show significant improvement in OS compared with the chemotherapy-alone groups: 16·8 months in the chemotherapy plus bevacizumab groups versus 13·3 months in the chemotherapy-alone groups (hazard ratio 0·77 [95% CI 0·62-0·95]; p=0·007). Final OS among patients not receiving previous pelvic radiotherapy was 24·5 months versus 16·8 months (0·64 [0·37-1·10]; p=0·11). Postprogression OS was not significantly different between the chemotherapy plus bevacizumab groups (8·4 months) and chemotherapy-alone groups (7·1 months; 0·83 [0·66-1·05]; p=0·06). Fistula (any grade) occurred in 32 (15%) of 220 patients in the chemotherapy plus bevacizumab groups (all previously irradiated) versus three (1%) of 220 in the chemotherapy-alone groups (all previously irradiated). Grade 3 fistula developed in 13 (6%) versus one (<1%). No fistulas resulted in surgical emergencies, sepsis, or death.\n\n\nINTERPRETATION\nThe benefit conferred by incorporation of bevacizumab is sustained with extended follow-up as evidenced by the overall survival curves remaining separated. 
After progression while receiving bevacizumab, we did not observe a negative rebound effect (ie, shorter survival after bevacizumab is stopped than after chemotherapy alone is stopped). These findings represent proof-of-concept of the efficacy and tolerability of antiangiogenesis therapy in advanced cervical cancer.\n\n\nFUNDING\nNational Cancer Institute.",
"title": ""
},
{
"docid": "eebf03df49eb4a99f61d371e059ef43e",
"text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].",
"title": ""
},
{
"docid": "bba0687091acf218d9039c87cd08c01c",
"text": "Our project had two main objectives. First, we wanted to use historical tennis match data to predict the outcomes of future tennis matches. Next, we wanted to use the predictions from our resulting model to beat the current betting odds. After setting up our prediction and betting models, we were able to accurately predict the outcome of 69.6% of the 2016 and 2017 tennis season, and turn a 3.3% profit per match.",
"title": ""
},
{
"docid": "bac7f4109f023ee2df039f340dbaefb1",
"text": "In many important ext classification problems, acquiring class labels for training documents is costly, while gathering large quantities of unlabeled ata is cheap. This paper shows that the accuracy of text classifiers trained with a small number of labeled documents can be improved by augmenting this small training set with a large pool of unlabeled documents. We present a theoretical argument showing that, under common assumptions, unlabeled data contain information about the target function. We then introduce an algorithm for learning from labeled and unlabeled text based on the combination of Expectation-Maximization with a naive Bayes classifier. The algorithm first trains a classifter using the available labeled documents, and probabilistically labels the unlabeled documents; it then trains a new classifier using the labels for all the documents, and iterates to convergence. Experimental results, obtained using text from three different realworld tasks, show that the use of unlabeled data reduces classification error by up to 33%.",
"title": ""
},
{
"docid": "c1fc1a31d9f5033a7469796d1222aef3",
"text": "Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static camera. This information is usually provided by motor encoders, however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy that is comparable to a standard static multi-camera configuration.",
"title": ""
},
{
"docid": "c206399c6ebf96f3de3aa5fdb10db49d",
"text": "Canine monocytotropic ehrlichiosis (CME), caused by the rickettsia Ehrlichia canis, an important canine disease with a worldwide distribution. Diagnosis of the disease can be challenging due to its different phases and multiple clinical manifestations. CME should be suspected when a compatible history (living in or traveling to an endemic region, previous tick exposure), typical clinical signs and characteristic hematological and biochemical abnormalities are present. Traditional diagnostic techniques including hematology, cytology, serology and isolation are valuable diagnostic tools for CME, however a definitive diagnosis of E. canis infection requires molecular techniques. This article reviews the current literature covering the diagnosis of infection caused by E. canis.",
"title": ""
},
{
"docid": "564591c62475a2f9ec1eafb8ce95ae32",
"text": "IT companies worldwide have started to improve their service management processes based on best practice frameworks, such as IT Infrastructure Library (ITIL). However, many of these companies face difficulties in demonstrating the positive outcomes of IT service management (ITSM) process improvement. This has led us to investigate the research problem: What positive impacts have resulted from IT service management process improvement? The main contributions of this paper are 1) to identify the ITSM process improvement outcomes in two IT service provider organizations and 2) provide advice as lessons learnt.",
"title": ""
},
{
"docid": "8979ac412e25cf842611dcb257836cea",
"text": "Tensors or <italic>multiway arrays</italic> are functions of three or more indices <inline-formula> <tex-math notation=\"LaTeX\">$(i,j,k,\\ldots)$</tex-math></inline-formula>—similar to matrices (two-way arrays), which are functions of two indices <inline-formula><tex-math notation=\"LaTeX\">$(r,c)$</tex-math></inline-formula> for (row, column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining, and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth <italic>and depth</italic> that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.",
"title": ""
}
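As a concrete companion to the rank-decomposition material surveyed above, here is a bare-bones CP (PARAFAC) decomposition of a 3-way array by alternating least squares in plain NumPy. It is a sketch for intuition only (no normalization, stopping rule, or missing-data handling), not an excerpt from any tensor toolbox.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: result is (I*J) x R.
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(X, rank, n_iter=200, seed=0):
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Quick check on a synthetic rank-2 tensor: relative error is typically near zero.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(5, 2)), rng.normal(size=(4, 2)), rng.normal(size=(3, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X))
```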
] |
scidocsrr
|
f9a7efbed11dfad3a174b2695f861ad7
|
Boosting and Differential Privacy
|
[
{
"docid": "b6fa1ee8c2f07b34768a78591c33bbbe",
"text": "We prove that there are arbitrarily long arithmetic progressions of primes. There are three major ingredients. [. . . ] [. . . ] for all x ∈ ZN (here (m0, t0, L0) = (3, 2, 1)) and E ( ν((x− y)/2)ν((x− y + h2)/2)ν(−y)ν(−y − h1)× × ν((x− y′)/2)ν((x− y′ + h2)/2)ν(−y)ν(−y − h1)× × ν(x)ν(x + h1)ν(x + h2)ν(x + h1 + h2) ∣∣∣∣ x, h1, h2, y, y′ ∈ ZN) = 1 + o(1) (0.1) (here (m0, t0, L0) = (12, 5, 2)). [. . . ] Proposition 0.1 (Generalised von Neumann). Suppose that ν is k-pseudorandom. Let f0, . . . , fk−1 ∈ L(ZN) be functions which are pointwise bounded by ν+νconst, or in other words |fj(x)| 6 ν(x) + 1 for all x ∈ ZN , 0 6 j 6 k − 1. (0.2) Let c0, . . . , ck−1 be a permutation of {0, 1, . . . , k − 1} (in practice we will take cj := j). Then E ( k−1 ∏ j=0 fj(x + cjr) ∣∣∣∣ x, r ∈ ZN) = O( inf 06j6k−1 ‖fj‖Uk−1) + o(1).",
"title": ""
},
{
"docid": "1d9004c4115c314f49fb7d2f44aaa598",
"text": "We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems.",
"title": ""
}
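A schematic rendering of the Propose-Test-Release idea described above may help: propose a sensitivity bound, privately test that the dataset is far from any dataset on which that bound fails, and only then release a noisy answer. The `dist_to_unstable` function is the problem-specific (and usually hard) part and is left abstract here; the threshold constant and the budget accounting are simplifying assumptions of this sketch, not the authors' exact construction.

```python
import numpy as np

def propose_test_release(data, query, proposed_bound, dist_to_unstable,
                         eps, delta, rng=np.random.default_rng()):
    # dist_to_unstable(data): number of records that must change before the local
    # sensitivity of `query` can exceed `proposed_bound` (caller-supplied).
    d = dist_to_unstable(data)
    d_noisy = d + rng.laplace(scale=1.0 / eps)
    threshold = np.log(1.0 / (2.0 * delta)) / eps   # threshold constant varies across presentations
    if d_noisy <= threshold:
        return None                                 # refuse to answer ("no reply")
    # A careful accounting would split the privacy budget between the test and the release.
    return query(data) + rng.laplace(scale=proposed_bound / eps)
```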
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
},
{
"docid": "022a18f7fe530372720c6cb9daf64b94",
"text": "Deep neural networks are currently among the most commonly used classifiers. Despite easily achieving very good performance, one of the best selling points of these models is their modular design – one can conveniently adapt their architecture to specific needs, change connectivity patterns, attach specialised layers, experiment with a large amount of activation functions, normalisation schemes and many others. While one can find impressively wide spread of various configurations of almost every aspect of the deep nets, one element is, in authors’ opinion, underrepresented – while solving classification problems, vast majority of papers and applications simply use log loss. In this paper we try to investigate how particular choices of loss functions affect deep models and their learning dynamics, as well as resulting classifiers robustness to various effects. We perform experiments on classical datasets, as well as provide some additional, theoretical insights into the problem. In particular we show that L1 and L2 losses are, quite surprisingly, justified classification objectives for deep nets, by providing probabilistic interpretation in terms of expected misclassification. We also introduce two losses which are not typically used as deep nets objectives and show that they are viable alternatives to the existing ones.",
"title": ""
},
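To make the compared objectives concrete, the snippet below evaluates log loss, L1, and L2 on the same softmax output against a one-hot target; it is purely illustrative and says nothing about which objective trains best.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])
target = np.array([1.0, 0.0, 0.0])        # one-hot label
p = softmax(logits)

log_loss = -np.sum(target * np.log(p))    # the usual classification objective
l1_loss = np.abs(p - target).sum()
l2_loss = np.square(p - target).sum()
print(f"p={np.round(p, 3)}  log={log_loss:.3f}  L1={l1_loss:.3f}  L2={l2_loss:.3f}")
```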
{
"docid": "f3abf5a6c20b6fff4970e1e63c0e836b",
"text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.",
"title": ""
},
{
"docid": "5c74d0cfcbeaebc29cdb58a30436556a",
"text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.",
"title": ""
},
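The clustering idea can be illustrated with a tiny DSM: rows and columns are permuted with the same ordering so that strongly coupled elements form blocks along the diagonal. The greedy ordering below is my own stand-in for the article's procedure, not a reproduction of it.

```python
import numpy as np

def reorder_dsm(D):
    n = D.shape[0]
    S = D + D.T                                  # symmetric coupling strength
    order = [int(np.argmax(S.sum(axis=1)))]      # start from the most coupled element
    remaining = set(range(n)) - set(order)
    while remaining:
        # Append the element most strongly coupled to those already placed.
        nxt = max(remaining, key=lambda j: S[j, order].sum())
        order.append(nxt)
        remaining.remove(nxt)
    order = np.array(order)
    return D[np.ix_(order, order)], order

D = np.array([[0, 1, 0, 0, 1],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [1, 1, 0, 0, 0]])
blocked, order = reorder_dsm(D)
print(order)      # the coupled sets {0, 1, 4} and {2, 3} end up adjacent
print(blocked)    # block-diagonal structure reveals the two candidate modules
```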
{
"docid": "8e7cfad4f1709101e5790343200d1e16",
"text": "Although electronic commerce experts often cite privacy concerns as barriers to consumer electronic commerce, there is a lack of understanding about how these privacy concerns impact consumers' willingness to conduct transactions online. Therefore, the goal of this study is to extend previous models of e-commerce adoption by specifically assessing the impact that consumers' concerns for information privacy (CFIP) have on their willingness to engage in online transactions. To investigate this, we conducted surveys focusing on consumers’ willingness to transact with a well-known and less well-known Web merchant. Results of the study indicate that concern for information privacy affects risk perceptions, trust, and willingness to transact for a wellknown merchant, but not for a less well-known merchant. In addition, the results indicate that merchant familiarity does not moderate the relationship between CFIP and risk perceptions or CFIP and trust. Implications for researchers and practitioners are discussed. 1 Elena Karahanna was the accepting senior editor. Kathy Stewart Schwaig and David Gefen were the reviewers. This paper was submitted on October 12, 2004, and went through 4 revisions. Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 415-444/June 2006 416 Introduction Although information privacy concerns have long been cited as barriers to consumer adoption of business-to-consumer (B2C) e-commerce (Hoffman et al., 1999, Sullivan, 2005), the results of studies focusing on privacy concerns have been equivocal. Some studies find that mechanisms intended to communicate information about privacy protection such as privacy seals and policies increase intentions to engage in online transactions (Miyazaki and Krishnamurthy, 2002). In contrast, others find that these mechanisms have no effect on consumer willingness to engage in online transactions (Kimery and McCord, 2002). Understanding how consumers’ concerns for information privacy (CFIP), or their concerns about how organizations use and protect personal information (Smith et al., 1996), impact consumers’ willingness to engage in online transactions is important to our knowledge of consumer-oriented e-commerce. For example, if CFIP has a strong direct impact on willingness to engage in online transactions, both researchers and practitioners may want to direct efforts at understanding how to allay some of these concerns. In contrast, if CFIP only impacts willingness to transact through other factors, then efforts may be directed at influencing these factors through both CFIP as well as through their additional antecedents. Prior research on B2C e-commerce examining consumer willingness to transact has focused primarily on the role of trust and trustworthiness either using trust theory or using acceptance, and adoption-based theories as frameworks from which to study trust. The research based on trust theories tends to focus on the structure of trust or on antecedents to trust (Bhattacherjee, 2002; Gefen, 2000; Jarvenpaa et al., 2000; McKnight et al., 2002a). Adoptionand acceptance-based research includes studies using the Technology Acceptance Model (Gefen et al., 2003) and diffusion theory (Van Slyke et al., 2004) to examine the effects of trust within well-established models. To our knowledge, studies of the effects of trust in the context of e-commerce transactions have not included CFIP as an antecedent in their models. 
The current research addresses this by examining the effect of CFIP on willingness to transact within a nomological network of additional antecedents (i.e., trust and risk) that we expect will be influenced by CFIP. In addition, familiarity with the Web merchant may moderate the relationship between CFIP and both trust and risk perceptions. As an individual becomes more familiar with the Web merchant and how it collects and protects personal information, perceptions may be driven more by knowledge of the merchant than by information concerns. This differential relationship between factors for more familiar (e.g. experienced) and less familiar merchants is similar to findings of previous research on user acceptance for potential and repeat users of technology (Karahanna et al., 1999) and e-commerce customers (Gefen et al., 2003). Thus, this research has two goals. The first goal is to better understand the role that consumers’ concerns for information privacy (CFIP) have on their willingness to engage in online transactions. The second goal is to investigate whether familiarity moderates the effects of CFIP on key constructs in our nomological network. Specifically, the following research questions are investigated: How do consumers’ concerns for information privacy affect their willingness to engage in online transactions? Does consumers' familiarity with a Web merchant moderate the impact of concern for information privacy on risk and on trust? Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 415-444/June 2006 417 This paper is organized as follows. First, we provide background information regarding the existing literature and the constructs of interest. Next, we present our research model and develop the hypotheses arising from the model. We then describe the method by which we investigated the hypotheses. This is followed by a discussion of the results of our analysis. We conclude the paper by discussing the implications and limitations of our work, along with suggestions for future research. Research Model and Hypotheses Figure 1 presents this study's research model. Given that concern for information privacy is the central focus of the study, we embed the construct within a nomological network of willingness to transact in prior research. Specifically, we include risk, familiarity with the merchant, and trust (Bhattacherjee, 2002; Gefen et al., 2003; Jarvenpaa and Tractinsky, 1999; Van Slyke et al., 2004) constructs that CFIP is posited to influence and that have been found to influence. We first discuss CFIP and then present the theoretical rationale that underlies the relationships presented in the research model. We begin our discussion of the research model by providing an overview of CFIP, focusing on this construct in the context of e-commerce.",
"title": ""
},
{
"docid": "7401f7a0f82fa6384cd62eb4b77c1ea2",
"text": "The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her locations histories and that of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with a slightly additional computation, and is more efficient than the Pearson correlation-based CF model with the similar effectiveness.",
"title": ""
},
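The mutual-reinforcement idea behind the HITS-based inference above can be sketched in a few lines: users act as hubs whose travel-experience scores feed location interest (authority) scores, and vice versa. The visit matrix is toy data and the TBHG construction is omitted; this is an illustration of the mechanism, not the paper's system.

```python
import numpy as np

V = np.array([[3, 0, 1, 0],      # V[u, l] = number of visits by user u to location l
              [1, 2, 0, 0],
              [0, 1, 4, 1],
              [0, 0, 1, 2]], dtype=float)

experience = np.ones(V.shape[0])          # hub scores (users' travel experience)
interest = np.ones(V.shape[1])            # authority scores (locations' interest level)
for _ in range(50):
    interest = V.T @ experience
    interest /= np.linalg.norm(interest)
    experience = V @ interest
    experience /= np.linalg.norm(experience)

print("location interest:", np.round(interest, 3))
print("user experience:  ", np.round(experience, 3))
```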
{
"docid": "2f8635d4da12fd6d161c7b10c140f8f9",
"text": "Technology has made navigation in 3D real time possible and this has made possible what seemed impossible. This paper explores the aspect of deep visual odometry methods for mobile robots. Visual odometry has been instrumental in making this navigation successful. Noticeable challenges in mobile robots including the inability to attain Simultaneous Localization and Mapping have been solved by visual odometry through its cameras which are suitable for human environments. More intuitive, precise and accurate detection have been made possible by visual odometry in mobile robots. Another challenge in the mobile robot world is the 3D map reconstruction for exploration. A dense map in mobile robots can facilitate for localization and more accurate findings. I. VISUAL ODOMETRY IN MOBILE ROBOTS Mobile robot applications heavily rely on the ability of the vehicle to achieve accurate localization. It is essential that a robot is able to maintain knowledge about its position at all times in order to achieve autonomous navigation. To attain this, various techniques, systems and sensors have been established to aid with mobile robot positioning including visual odometry [1]. Importantly, the adoption of Deep Learning based techniques was inspired by the precision to find solutions to numerous standard computer vision problems including object detection, image classification and segmentation. Visual odometry involves the pose estimation process that involves a robot and how they use a stream of images obtained from cameras that are attached to them [2]. The main aim of visual odometry is the estimations from camera pose. It is an approach that avoids contact with the robot for the purpose of ensuring that the mobile robots are effectively positioned. For this reason, the process is quite a challenging task that is related to mapping and simultaneous localization whose main aim is to generate the road map from a stream of visual data [3]. Estimates of motion from pixel differences and features between frames are made based on cameras that are strategically positioned. For mobile robots to achieve an actively controlled navigation, a real time 3D and reliable localization and reconstruction of functions is an essential prerequisite [4]. Mobile robots have to perform localization and mapping functions simultaneously and this poses a major challenge for them. The Simultaneous Localization and Mapping (SLAM) problem has attracted attention as various studies extensively evaluate it [5]. To solve the SLAM problem, visual odometry has been suggested especially because cameras provide high quality information at a low cost from the sensors that are conducive for human environments [6]. The major advances in computer vision also make possible quite a number of synergistic capabilities including terrain and scene classification, object detection and recognition. Notably, the visual odometry in mobile robot have enabled for more precise, intuitive and accurate detection. Although there has been significant progress in the last decade to bring improvements to passive mobile robots into controllable robots that are active, there are still notable challenges in the effort to achieve this. Particularly, a 3D map reconstruction that is fully dense to facilitate for exploration still remains an unsolved problem. It is only through a dense map that mobile robots can be able to more reliably do localization and ultimately leading to findings that are more accurate [7] [8]. 
According to Turan ( [9]), it is essential that adoptions of a comprehensive reconstruction on the suitable 3D method for mobile robots be adopted. This can be made possible through the building of a modular fashion including key frame selection, pre-processing, estimates on sparse then dense alignment based pose, shading based 3D and bundle fusion reconstruction [10]. There is also the challenge of the real time precise localization of the mobile robots that are actively controlled. The study by [11], which employed quantitative and qualitative in trajectory estimations sought to find solution to the challenge of precise localization for the endoscopic robot capsule. The data set was general and this was ensured through the fitting of 3 endoscopic cameras in different locations for the purpose of capturing the endoscopic videos [12]. Stomach videos were recorded for 15 minutes and they contained more than 10,000 frames. Through this, the ground truth was served for the 3D reconstruction module maps’ quantitative evaluations [13]. Its findings proposed that the direct SLAM be implemented on a map fusion based method that is non rigid for the mobile robots [14]. Through this method, high accuracy is likely to be achieved for extensive evaluations and conclusions [15]. The industry of mobile robots continues to face numerous challenges majorly because of enabling technology, including perception, artificial intelligence and power sources [16]. Evidently, motors, actuators and gears are essential to the robotic world today. Work is still in progress in the development of soft robotics, artificial muscles and strategies of assembly that are aimed at developing the autonomous robot’s generation in the coming future that are power efficient and multifunctional. There is also the aspect of robots lacing synchrony, calibration and symmetry which serves to increase the photometric error. This challenge maybe addressed by adopting the direct odometry method [17]. Direct sparse odometry has been recommended by various studies since it has been found to reduce the photometric error. This can be associated to the fact that it combines a probabilistic model with joint optimization of model parameters [9]. It has also been found to maintain high levels of consistency especially because it incorporates geometry parameters which also increase accuracy levels [18].",
"title": ""
},
{
"docid": "b729bb8bc6a9b8dd655b77a7bfc68846",
"text": "BACKGROUND\nWe describe our experiences with vaginal vault resection for vaginal recurrence of cervical cancer after hysterectomy and radiotherapy. After operative treatment, the rate of vaginal vault recurrence of uterine cervical cancer is reported to be about 5%. There is no consensus regarding the treatment for these cases.\n\n\nMETHODS\nBetween 2004 and 2012, eight patients with vaginal vault recurrence underwent removal of the vaginal wall via laparotomy after hysterectomy and radiotherapy.\n\n\nRESULTS\nThe median patient age was 45 years (range 35 to 70 years). The median operation time was 244.5 min (range 172 to 590 min), the median estimated blood loss was 362.5 mL (range 49 to 1,890 mL), and the median duration of hospitalization was 24.5 days (range 11 to 50 days). Two patients had intraoperative complications: a grade 1 bowel injury and a grade 1 bladder injury. The following postoperative complications were observed: one patient had vaginal vault bleeding, three patients developed vesicovaginal fistulae, and one patient had repeated ileus. Two patients needed clean intermittent catheterization. Local control was achieved in five of the eight cases.\n\n\nCONCLUSIONS\nVaginal vault resection is an effective treatment for vaginal recurrence of cervical cancer after hysterectomy and radiotherapy. However, complications of this procedure can be expected to reduce quality of life. Therefore, this operation should be selected with great care.",
"title": ""
},
{
"docid": "cc06553e4d03bf8541597d01de4d5eae",
"text": "Several technologies are used today to improve safety in transportation systems. The development of a system for drivability based on both V2V and V2I communication is considered an important task for the future. V2X communication will be a next step for the transportation safety in the nearest time. A lot of different structures, architectures and communication technologies for V2I based systems are under development. Recently a global paradigm shift known as the Internet-of-Things (IoT) appeared and its integration with V2I communication could increase the safety of future transportation systems. This paper brushes up on the state-of-the-art of systems based on V2X communications and proposes an approach for system architecture design of a safe intelligent driver assistant system using IoT communication. In particular, the paper presents the design process of the system architecture using IDEF modeling methodology and data flows investigations. The proposed approach shows the system design based on IoT architecture reference model.",
"title": ""
},
{
"docid": "8d1797caf78004e6ba548ace7d5a1161",
"text": "An automated irrigation system was developed to optimize water use for agricultural crops. The system has a distributed wireless network of soil-moisture and temperature sensors placed in the root zone of the plants. In addition, a gateway unit handles sensor information, triggers actuators, and transmits data to a web application. An algorithm was developed with threshold values of temperature and soil moisture that was programmed into a microcontroller-based gateway to control water quantity. The system was powered by photovoltaic panels and had a duplex communication link based on a cellular-Internet interface that allowed for data inspection and irrigation scheduling to be programmed through a web page. The automated system was tested in a sage crop field for 136 days and water savings of up to 90% compared with traditional irrigation practices of the agricultural zone were achieved. Three replicas of the automated system have been used successfully in other places for 18 months. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
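A minimal version of the threshold rule described above might look like the following; the sensor and valve functions are stubs and the threshold values are placeholders, not the values programmed into the authors' gateway.

```python
import time

MOISTURE_MIN = 30.0     # assumed soil-moisture threshold (% volumetric water content)
TEMP_MIN = 20.0         # assumed temperature threshold (degrees Celsius)

def read_soil_moisture():      # stub: replace with the wireless sensor driver
    return 27.5

def read_temperature():        # stub: replace with the wireless sensor driver
    return 24.0

def set_valve(open_valve):     # stub: replace with the actuator/relay command
    print("valve open" if open_valve else "valve closed")

def control_loop(steps=3, period_s=1):
    for _ in range(steps):
        moisture, temp = read_soil_moisture(), read_temperature()
        # Irrigate only while the soil is dry and the temperature is high enough.
        set_valve(moisture < MOISTURE_MIN and temp > TEMP_MIN)
        time.sleep(period_s)

control_loop()
```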
{
"docid": "3613dd18a4c930a28ed520192f7ac23f",
"text": "OBJECTIVES\nIn this paper we present a contemporary understanding of \"nursing informatics\" and relate it to applications in three specific contexts, hospitals, community health, and home dwelling, to illustrate achievements that contribute to the overall schema of health informatics.\n\n\nMETHODS\nWe identified literature through database searches in MEDLINE, EMBASE, CINAHL, and the Cochrane Library. Database searching was complemented by one author search and hand searches in six relevant journals. The literature review helped in conceptual clarification and elaborate on use that are supported by applications in different settings.\n\n\nRESULTS\nConceptual clarification of nursing data, information and knowledge has been expanded to include wisdom. Information systems and support for nursing practice benefits from conceptual clarification of nursing data, information, knowledge, and wisdom. We introduce three examples of information systems and point out core issues for information integration and practice development.\n\n\nCONCLUSIONS\nExploring interplays of data, information, knowledge, and wisdom, nursing informatics takes a practice turn, accommodating to processes of application design and deployment for purposeful use by nurses in different settings. Collaborative efforts will be key to further achievements that support task shifting, mobility, and ubiquitous health care.",
"title": ""
},
{
"docid": "edb92440895801051e0bf63ade2cfbf8",
"text": "Over the last three decades, dietary pattern analysis has come to the forefront of nutritional epidemiology, where the combined effects of total diet on health can be examined. Two analytical approaches are commonly used: a priori and a posteriori. Cluster analysis is a commonly used a posteriori approach, where dietary patterns are derived based on differences in mean dietary intake separating individuals into mutually exclusive, non-overlapping groups. This review examines the literature on dietary patterns derived by cluster analysis in adult population groups, focusing, in particular, on methodological considerations, reproducibility, validity and the effect of energy mis-reporting. There is a wealth of research suggesting that the human diet can be described in terms of a limited number of eating patterns in healthy population groups using cluster analysis, where studies have accounted for differences in sex, age, socio-economic status, geographical area and weight status. Furthermore, patterns have been used to explore relationships with health and chronic diseases and more recently with nutritional biomarkers, suggesting that these patterns are biologically meaningful. Overall, it is apparent that consistent trends emerge when using cluster analysis to derive dietary patterns; however, future studies should focus on the inconsistencies in methodology and the effect of energy mis-reporting.",
"title": ""
},
{
"docid": "cafdc8bb8b86171026d5a852e7273486",
"text": "A majority of the existing algorithms which mine graph datasets target complete, frequent sub-graph discovery. We describe the graph-based data mining system Subdue which focuses on the discovery of sub-graphs which are not only frequent but also compress the graph dataset, using a heuristic algorithm. The rationale behind the use of a compression-based methodology for frequent pattern discovery is to produce a fewer number of highly interesting patterns than to generate a large number of patterns from which interesting patterns need to be identified. We perform an experimental comparison of Subdue with the graph mining systems gSpan and FSG on the Chemical Toxicity and the Chemical Compounds datasets that are provided with gSpan. We present results on the performance on the Subdue system on the Mutagenesis and the KDD 2003 Citation Graph dataset. An analysis of the results indicates that Subdue can efficiently discover best-compressing frequent patterns which are fewer in number but can be of higher interest.",
"title": ""
},
{
"docid": "5217c15c210a9475082329dff72811a2",
"text": "This paper describes the USAAR-WLV taxonomy induction system that participated in the Taxonomy Extraction Evaluation task of SemEval-2015. We extend prior work on using vector space word embedding models for hypernym-hyponym extraction by simplifying the means to extract a projection matrix that transforms any hyponym to its hypernym. This is done by making use of function words, which are usually overlooked in vector space approaches to NLP. Our system performs best in the chemical domain and has achieved competitive results in the overall evaluations.",
"title": ""
},
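The projection-matrix idea can be sketched as one least-squares problem: learn a matrix Phi that maps hyponym embeddings approximately onto their hypernym embeddings from training pairs. Random vectors stand in for real word embeddings here, so this illustrates the mechanism rather than the submitted system.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 50, 200
Phi_true = rng.normal(size=(dim, dim)) / np.sqrt(dim)   # hidden "true" mapping for the demo

X = rng.normal(size=(n_pairs, dim))                            # hyponym vectors (rows)
Y = X @ Phi_true.T + 0.01 * rng.normal(size=(n_pairs, dim))    # corresponding hypernym vectors

# Solve min_Phi ||X Phi^T - Y||_F via ordinary least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Phi = W.T

query = rng.normal(size=dim)            # a new hyponym vector
predicted_hypernym = Phi @ query        # compare against candidate hypernyms by cosine similarity
print(np.linalg.norm(Phi - Phi_true) / np.linalg.norm(Phi_true))   # small recovery error
```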
{
"docid": "fd3fb8803a618ff9f738b10dc484f6bc",
"text": "Various studies on consumer purchasing behaviors have been presented and used in real problems. Data mining techniques are expected to be a more effective tool for analyzing consumer behaviors. However, the data mining method has disadvantages as well as advantages. Therefore, it is important to select appropriate techniques to mine databases. The objective of this paper is to improve conventional data mining analysis by applying several methods including fuzzy clustering, principal component analysis, and discriminate analysis. Many defects included in the conventional methods are improved in the paper. Moreover, in an experiment, association rule is employed to mine rules for trusted customers using sales data in a fiber industry",
"title": ""
},
{
"docid": "2ab1f2d0ca28851dcc36721686a06fa2",
"text": "A quarter-century ago visual neuroscientists had little information about the number and organization of retinotopic maps in human visual cortex. The advent of functional magnetic resonance imaging (MRI), a non-invasive, spatially-resolved technique for measuring brain activity, provided a wealth of data about human retinotopic maps. Just as there are differences amongst non-human primate maps, the human maps have their own unique properties. Many human maps can be measured reliably in individual subjects during experimental sessions lasting less than an hour. The efficiency of the measurements and the relatively large amplitude of functional MRI signals in visual cortex make it possible to develop quantitative models of functional responses within specific maps in individual subjects. During this last quarter-century, there has also been significant progress in measuring properties of the human brain at a range of length and time scales, including white matter pathways, macroscopic properties of gray and white matter, and cellular and molecular tissue properties. We hope the next 25years will see a great deal of work that aims to integrate these data by modeling the network of visual signals. We do not know what such theories will look like, but the characterization of human retinotopic maps from the last 25years is likely to be an important part of future ideas about visual computations.",
"title": ""
},
{
"docid": "bf5d53e5465dd5e64385bf9204324059",
"text": "A model of core losses, in which the hysteresis coefficients are variable with the frequency and induction (flux density) and the eddy-current and excess loss coefficients are variable only with the induction, is proposed. A procedure for identifying the model coefficients from multifrequency Epstein tests is described, and examples are provided for three typical grades of non-grain-oriented laminated steel suitable for electric motor manufacturing. Over a wide range of frequencies between 20-400 Hz and inductions from 0.05 to 2 T, the new model yielded much lower errors for the specific core losses than conventional models. The applicability of the model for electric machine analysis is also discussed, and examples from an interior permanent-magnet and an induction motor are included.",
"title": ""
},
{
"docid": "8d0ce09e523001eb9d34d38108a2f603",
"text": "In this paper we describe a point-based approach for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. The deformation gradient is computed for each particle by finding the affine transformation that best approximates the motion of neighboring particles over a single timestep. These transformations are then composed to compute the total deformation gradient that describes the deformation around a particle over the course of the simulation. Given the deformation gradient we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. We demonstrate our approach on a number of examples that exhibit a wide range of material behaviors.",
"title": ""
}
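The central computation described above, stripped of weighting kernels and constitutive models, is a per-particle least-squares affine fit followed by multiplicative composition; the tiny synthetic example below checks exactly that and nothing more.

```python
import numpy as np

def step_deformation(prev_offsets, curr_offsets):
    # Solve min_A sum_i ||A @ prev_i - curr_i||^2, where prev/curr are N x 3 arrays of
    # neighbor positions relative to the particle at the start/end of the timestep.
    X, *_ = np.linalg.lstsq(prev_offsets, curr_offsets, rcond=None)
    return X.T          # lstsq solves prev @ X = curr, so the affine map is X^T

rng = np.random.default_rng(0)
neighbors0 = rng.normal(size=(8, 3))                  # initial offsets to 8 neighbors
F1 = np.diag([1.1, 0.9, 1.0])                         # two small "true" per-step deformations
F2 = np.array([[1.0, 0.05, 0.0],
               [0.0, 1.0,  0.0],
               [0.0, 0.0,  1.0]])
offsets1 = neighbors0 @ F1.T
offsets2 = offsets1 @ F2.T

# Compose per-step gradients multiplicatively: F_total = F_k ... F_2 F_1.
F_total = np.eye(3)
for prev, curr in [(neighbors0, offsets1), (offsets1, offsets2)]:
    F_total = step_deformation(prev, curr) @ F_total

print(np.allclose(F_total, F2 @ F1))                  # True: recovered total deformation gradient
```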
] |
scidocsrr
|
dcb11284b2845dd25b24eba3cfb77a1b
|
A matrix-free cone complementarity approach for solving large-scale, nonsmooth, rigid body dynamics. Preprint ANL/MCS-P1692-1109
|
[
{
"docid": "78229ed553e824250f5514b81a3d5ba1",
"text": "In the context of simulating the frictional contact dynamic s of large systems of rigid bodies, this paper reviews a novel method for solving large cone complementarity proble ms by means of a fixed-point iteration algorithm. The method is an extension of the Gauss-Seidel and Gauss-Jacobi methods with overrelaxation for symmetric convex linear complementarity problems. Convergent under fairly standa rd assumptions, the method is implemented in a parallel framework by using a single instruction multiple data compu tation paradigm promoted by the Compute Unified Device Architecture library for graphical processing unit progra mming. The framework supports the simulation of problems with more than 1 million bodies in contact. Simulation thus b ecomes a viable tool for investigating the dynamics of complex systems such as ground vehicles running on sand, pow der composites, and granular material flow.",
"title": ""
}
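The fixed-point iteration with cone projection reads, in outline, gamma <- Pi_K(gamma - omega*(N gamma + r)), with Pi_K projecting each contact's multipliers onto its friction cone. The sketch below uses a small dense N purely for illustration (the matrix-free point is precisely that N*gamma can be evaluated without assembling N), assumes each contact contributes one normal and two tangential multipliers, and omits overrelaxation details; it is not the reviewed method itself.

```python
import numpy as np

def project_friction_cone(g, mu):
    # g = [gamma_n, gamma_t1, gamma_t2]: normal then tangential multipliers (assumed ordering).
    n, t = g[0], g[1:]
    tn = np.linalg.norm(t)
    if tn <= mu * n:                        # already inside the friction cone
        return g
    if mu * tn <= -n:                       # inside the polar cone: project to the apex
        return np.zeros(3)
    a = (n + mu * tn) / (1.0 + mu * mu)     # otherwise project onto the cone boundary
    return np.concatenate(([a], (mu * a / tn) * t))

def solve_ccp(N, r, mu, n_contacts, iters=1000):
    omega = 1.0 / np.linalg.norm(N, 2)      # conservative fixed step, no overrelaxation
    gamma = np.zeros(3 * n_contacts)
    for _ in range(iters):
        gamma = gamma - omega * (N @ gamma + r)
        for c in range(n_contacts):
            gamma[3*c:3*c+3] = project_friction_cone(gamma[3*c:3*c+3], mu)
    return gamma

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
N = M @ M.T + 1e-3 * np.eye(6)              # symmetric positive definite stand-in matrix
r = rng.normal(size=6)
print(solve_ccp(N, r, mu=0.5, n_contacts=2))
```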
] |
[
{
"docid": "b5d2e42909bf8ce64beebe38630fcb47",
"text": "In this paper we combine one method for hierarchical reinforcement learning—the options framework—with deep Q-networks (DQNs) through the use of different “option heads” on the policy network, and a supervisory network for choosing between the different options. We utilise our setup to investigate the effects of architectural constraints in subtasks with positive and negative transfer, across a range of network capacities. We empirically show that our augmented DQN has lower sample complexity when simultaneously learning subtasks with negative transfer, without degrading performance when learning subtasks with positive transfer.",
"title": ""
},
{
"docid": "d0de4e5b82de87431752d95a6afebc2e",
"text": "Fire deaths are usually accidental, but atypical cases of homicide or suicide have been described. In suicide by fire, the only method reported by several authors consists of self-immolation. We present here the unusual case of an adult female who committed suicide by waiting in the living room after setting fire to her bedroom. The autopsy revealed smoke inhalation and the toxicological analysis revealed carboxyhemoglobin levels of 67%. Very few cases of suicide by fire not of the self-immolation type have been reported, and all have been anecdotal. A review of the literature is presented and a new term, \"suicide by inhalation of carbon monoxide in a fire,\" is proposed for such cases.",
"title": ""
},
{
"docid": "7b4f6382a7421fa08177c045eb9fdd66",
"text": "Cross-site scripting (XSS) vulnerabilities are among the most common and serious web application vulnerabilities. XSS vulnerabilities are difficult to prevent because it is difficult for web applications to anticipate client-side semantics. We present Noncespaces, a technique that enables web clients to distinguish between trusted and untrusted content to prevent exploitation of XSS vulnerabilities. Using Noncespaces, a web application randomizes the XML namespace tags in each document before delivering it to the client. As long as the attacker is unable to predict the randomized prefixes, the client can distinguish between trusted content created by the web application and untrusted content provided by the attacker. Noncespaces uses client-side policy enforcement to avoid semantic ambiguities between the client and server. To implement Noncespaces with minimal changes to web applications, we leverage a popular web application architecture to automatically apply Noncespaces to static content processed through a popular PHP template engine. We show that with simple policies Noncespaces thwarts popular XSS attack vectors. As an additional benefit, the client-side policy not only allows a web application to restrict security-relevant capabilities to untrusted content but also narrows the application’s remaining attack vectors, which deserve more scrutiny by security auditors.",
"title": ""
},
{
"docid": "665f7b5de2ce617dd3009da3d208026f",
"text": "Throughout the history of the social evolution, man and animals come into frequent contact that forms an interdependent relationship between man and animals. The images of animals root in the everyday life of all nations, forming unique animal culture of each nation. Therefore, Chinese and English, as the two languages which spoken by the most people in the world, naturally contain a lot of words relating to animals, and because of different history and culture, the connotations of animal words in one language do not coincide with those in another. The clever use of animal words is by no means scarce in everyday communication or literary works, which helps make English and Chinese vivid and lively in image, plain and expressive in character, and rich and strong in flavor. In this study, many animal words are collected for the analysis of the similarities and the differences between the cultural connotations carried by animal words in Chinese and English, find out the causes of differences, and then discuss some methods and techniques for translating these animal words.",
"title": ""
},
{
"docid": "527387aa12e83ae1f9ec0f3056a26fb3",
"text": "Standard image captioning tasks such as COCO and Flickr30k are factual, neutral in tone and (to a human) state the obvious (e.g., “a man playing a guitar”). While such tasks are useful to verify that a machine understands the content of an image, they are not engaging to humans as captions. With this in mind we define a new task, PERSONALITY-CAPTIONS, where the goal is to be as engaging to humans as possible by incorporating controllable style and personality traits. We collect and release a large dataset of 201,858 of such captions conditioned over 215 possible traits. We build models that combine existing work from (i) sentence representations (Mazaré et al., 2018) with Transformers trained on 1.7 billion dialogue examples; and (ii) image representations (Mahajan et al., 2018) with ResNets trained on 3.5 billion social media images. We obtain state-of-theart performance on Flickr30k and COCO, and strong performance on our new task. Finally, online evaluations validate that our task and models are engaging to humans, with our best model close to human performance.",
"title": ""
},
{
"docid": "1f3e6d26fdfa8e73d40da544be34cb03",
"text": "State-of-the-art video restoration methods integrate optical flow estimation networks to utilize temporal information. However, these networks typically consider only a pair of consecutive frames and hence are not capable of capturing long-range temporal dependencies and fall short of establishing correspondences across several timesteps. To alleviate these problems, we propose a novel Spatio-temporal Transformer Network (STTN) which handles multiple frames at once and thereby manages to mitigate the common nuisance of occlusions in optical flow estimation. Our proposed STTN comprises a module that estimates optical flow in both space and time and a resampling layer that selectively warps target frames using the estimated flow. In our experiments, we demonstrate the efficiency of the proposed network and show state-ofthe-art restoration results in video super-resolution and video deblurring.",
"title": ""
},
{
"docid": "0ecb65da4effb562bfa29d06769b1a4c",
"text": "A new algorithm for testing primality is presented. The algorithm is distinguishable from the lovely algorithms of Solvay and Strassen [36], Miller [27] and Rabin [32] in that its assertions of primality are certain (i.e., provable from Peano's axioms) rather than dependent on unproven hypothesis (Miller) or probability (Solovay-Strassen, Rabin). An argument is presented which suggests that the algorithm runs within time c1ln(n)c2ln(ln(ln(n))) where n is the input, and C1, c2 constants independent of n. Unfortunately no rigorous proof of this running time is yet available.",
"title": ""
},
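For contrast with the certified test described above, here is a standard sketch of the Miller-Rabin style of probabilistic test it is being distinguished from: a composite number passes any single random round with probability at most 1/4, so repeated rounds give high (but not provable) confidence.

```python
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:               # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is composite
    return True                     # no witness found: n is probably prime

print(is_probable_prime(2**89 - 1), is_probable_prime(2**89 + 1))
```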
{
"docid": "83651ca357b0f978400de4184be96443",
"text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.",
"title": ""
},
{
"docid": "da5339bb74d6af2bfa7c8f46b4f50bb3",
"text": "Conversational agents are exploding in popularity. However, much work remains in the area of non goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million dollar university competition where sixteen selected university teams built conversational agents to deliver the best social conversational experience. Alexa Prize provided the academic community with the unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is key element underlying the challenge of building non-goal oriented dialogue systems. In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgement. The proposed metrics provide granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, to date it is the largest setting for evaluating agents with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.",
"title": ""
},
{
"docid": "d18a636768e6aea2e84c7fc59593ec89",
"text": "Enterprise social networking (ESN) techniques have been widely adopted by firms to provide a platform for public communication among employees. This study investigates how the relationships between stressors (i.e., challenge and hindrance stressors) and employee innovation are moderated by task-oriented and relationship-oriented ESN use. Since challenge-hindrance stressors and employee innovation are individual-level variables and task-oriented ESN use and relationship-oriented ESN use are team-level variables, we thus use hierarchical linear model to test this cross-level model. The results of a survey of 191 employees in 50 groups indicate that two ESN use types differentially moderate the relationship between stressors and employee innovation. Specifically, task-oriented ESN use positively moderates the effects of the two stressors on employee innovation, while relationship-oriented ESN use negatively moderates the relationship between the two stressors and employee innovation. In addition, we find that challenge stressors significantly improve employee innovation. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "9e669f91dcce29a497c8524fccc1380d",
"text": "Increased serum cholesterol and decreased high-density lipoprotein (HDL) cholesterol level in serum and cerebro-spinal fluid is a risk factor for the development of Alzheimer disease, and also a predictor of cardiovascular events and stroke in epidemiologic studies. Niacin (vitamin B 3 or nicotinic acid) is the most effective medication in current clinical use for increasing HDL cholesterol and it substantially lowers triglycerides and LDL cholesterol. This review provides an update on the role of the increasing HDL cholesterol agent, niacin, as a neuroprotective and neurorestorative agent which promotes angiogenesis and arteriogenesis after stroke and improves neurobehavioral recovery following central nervous system diseases such as stroke, Alzheimer’s disease and multiple sclerosis. The mechanisms underlying the niacin induced neuroprotective and neurorestorative effects after stroke are discussed. The primary focus of this review is on stroke, with shorter discussion on Alzheimer disease and multiple sclerosis.",
"title": ""
},
{
"docid": "04895dc56497ad7b6738b0bfa38812e6",
"text": "Recombinant protein production emerged in the early 1980s with the development of genetic engineering tools, which represented a compelling alternative to protein extraction from natural sources. Over the years, a high level of heterologous protein was made possible in a variety of hosts ranging from the bacteria Escherichia coli to mammalian cells. Recombinant protein importance is represented by its market size, which reached $1654 million in 2016 and is expected to reach $2850.5 million by 2022. Among the available hosts, yeasts have been used for producing a great variety of proteins applied to chemicals, fuels, food, and pharmaceuticals, being one of the most used hosts for recombinant production nowadays. Historically, Saccharomyces cerevisiae was the dominant yeast host for heterologous protein production. Lately, other yeasts such as Komagataella sp., Kluyveromyces lactis, and Yarrowia lipolytica have emerged as advantageous hosts. In this review, a comparative analysis is done listing the advantages and disadvantages of using each host regarding the availability of genetic tools, strategies for cultivation in bioreactors, and the main techniques utilized for protein purification. Finally, examples of each host will be discussed regarding the total amount of protein recovered and its bioactivity due to correct folding and glycosylation patterns.",
"title": ""
},
{
"docid": "0c76df51ba5e2d1aff885ac8fd146de8",
"text": "A design concept for a planar antenna array for Global Positioning System (GPS) applications is presented in this paper. A 4-element wideband circularly polarized array, which utilizes multi-layer microstrip patch antenna technology, was successfully designed and tested. The design achieves a very low axial ratio performance without compromising fabrication simplicity and overall antenna performance.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "e0160911f70fa836f64c08f721f6409e",
"text": "Today’s openly available knowledge bases, such as DBpedia, Yago, Wikidata or Freebase, capture billions of facts about the world’s entities. However, even the largest among these (i) are still limited in up-to-date coverage of what happens in the real world, and (ii) miss out on many relevant predicates that precisely capture the wide variety of relationships among entities. To overcome both of these limitations, we propose a novel approach to build on-the-fly knowledge bases in a query-driven manner. Our system, called QKBfly, supports analysts and journalists as well as question answering on emerging topics, by dynamically acquiring relevant facts as timely and comprehensively as possible. QKBfly is based on a semantic-graph representation of sentences, by which we perform three key IE tasks, namely named-entity disambiguation, co-reference resolution and relation extraction, in a light-weight and integrated manner. In contrast to Open IE, our output is canonicalized. In contrast to traditional IE, we capture more predicates, including ternary and higher-arity ones. Our experiments demonstrate that QKBfly can build high-quality, on-the-fly knowledge bases that can readily be deployed, e.g., for the task of ad-hoc question answering. PVLDB Reference Format: D. B. Nguyen, A. Abujabal, N. K. Tran, M. Theobald, and G. Weikum. Query-Driven On-The-Fly Knowledge Base Construction. PVLDB, 11 (1): 66-7 , 2017. DOI: 10.14778/3136610.3136616",
"title": ""
},
{
"docid": "98e78d8fb047140a73f2a43cbe4a1c74",
"text": "Genomics can transform health-care through precision medicine. Plummeting sequencing costs would soon make genome testing affordable to the masses. Compute efficiency, however, has to improve by orders of magnitude to sequence and analyze the raw genome data. Sequencing software used today can take several hundreds to thousands of CPU hours to align reads to a reference sequence. This paper presents GenAx, an accelerator for read alignment, a time-consuming step in genome sequencing. It consists of a seeding and seed-extension accelerator. The latter is based on an innovative automata design that was designed from the ground-up to enable hardware acceleration. Unlike conventional Levenshtein automata, it is string independent and scales quadratically with edit distance, instead of string length. It supports critical features commonly used in sequencing such as affine gap scoring and traceback. GenAx provides a throughput of 4,058K reads/s for Illumina 101 bp reads. GenAx achieves 31.7× speedup over the standard BWA-MEM sequence aligner running on a 56-thread dualsocket 14-core Xeon E5 server processor, while reducing power consumption by 12× and area by 5.6×.",
"title": ""
},
{
"docid": "9cd7c945291db3fc0cc0ece4cf03a186",
"text": "Coronary angiography is considered to be a safe tool for the evaluation of coronary artery disease and perform in approximately 12 million patients each year worldwide. [1] In most cases, angiograms are manually analyzed by a cardiologist. Actually, there are no clinical practice algorithms which could improve and automate this work. Neural networks show high efficiency in tasks of image analysis and they can be used for the analysis of angiograms and facilitate diagnostics. We have developed an algorithm based on Convolutional Neural Network and Neural Network U-Net [2] for vessels segmentation and defects detection such as stenosis. For our research we used anonymized angiography data obtained from one of the city’s hospitals and augmented them to improve learning efficiency. U-Net usage provided high quality segmentation and the combination of our algorithm with an ensemble of classifiers shows a good accuracy in the task of ischemia evaluation on test data. Subsequently, this approach can be served as a basis for the creation of an analytical system that could speed up the diagnosis of cardiovascular diseases and greatly facilitate the work of a specialist.",
"title": ""
},
{
"docid": "e3bb16dfbe54599c83743e5d7f1facc6",
"text": "Testosterone-dependent secondary sexual characteristics in males may signal immunological competence and are sexually selected for in several species,. In humans, oestrogen-dependent characteristics of the female body correlate with health and reproductive fitness and are found attractive. Enhancing the sexual dimorphism of human faces should raise attractiveness by enhancing sex-hormone-related cues to youth and fertility in females,, and to dominance and immunocompetence in males,,. Here we report the results of asking subjects to choose the most attractive faces from continua that enhanced or diminished differences between the average shape of female and male faces. As predicted, subjects preferred feminized to average shapes of a female face. This preference applied across UK and Japanese populations but was stronger for within-population judgements, which indicates that attractiveness cues are learned. Subjects preferred feminized to average or masculinized shapes of a male face. Enhancing masculine facial characteristics increased both perceived dominance and negative attributions (for example, coldness or dishonesty) relevant to relationships and paternal investment. These results indicate a selection pressure that limits sexual dimorphism and encourages neoteny in humans.",
"title": ""
},
{
"docid": "c32673f901f67389e5ac5d4b5d994617",
"text": "This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the set up of the one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of type I error probability under various sample size and parameter combinations. In fact, the type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests – the Welch test, James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.",
"title": ""
}
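In the spirit of the parametric bootstrap approach described above, the sketch below standardizes the between-group variation by the estimated variances and approximates its null distribution by simulating group means and sample variances from normal and chi-square models. The exact pivot used in the paper may differ in detail; treat this as illustrative only.

```python
import numpy as np

def between_group_stat(means, variances, sizes):
    w = sizes / variances                       # precision weights n_i / s_i^2
    grand = np.sum(w * means) / np.sum(w)
    return np.sum(w * (means - grand) ** 2)

def pb_anova(groups, n_boot=10000, seed=0):
    rng = np.random.default_rng(seed)
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = np.array([len(g) for g in groups])
    xbar = np.array([g.mean() for g in groups])
    s2 = np.array([g.var(ddof=1) for g in groups])
    t_obs = between_group_stat(xbar, s2, n)

    count = 0
    for _ in range(n_boot):
        xbar_b = rng.normal(0.0, np.sqrt(s2 / n))          # group means under the null (all equal to 0)
        s2_b = s2 * rng.chisquare(n - 1) / (n - 1)         # simulated sample variances
        count += between_group_stat(xbar_b, s2_b, n) >= t_obs
    return count / n_boot                                  # parametric-bootstrap p-value

rng = np.random.default_rng(1)
data = [rng.normal(0, 1, 10), rng.normal(0, 3, 8), rng.normal(1.5, 2, 12)]
print("PB p-value:", pb_anova(data))
```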
] |
scidocsrr
|
7b4e8bdceb72156cad0d263fb124f9a7
|
At the roots of dictionary compression: string attractors
|
[
{
"docid": "eee53d116e4aa9e0276ef8deec66ac76",
"text": "This paper addresses the smallest grammar problem: What is the smallest context-free grammar that generates exactly one given string /spl sigma/? This is a natural question about a fundamental object connected to many fields such as data compression, Kolmogorov complexity, pattern identification, and addition chains. Due to the problem's inherent complexity, our objective is to find an approximation algorithm which finds a small grammar for the input string. We focus attention on the approximation ratio of the algorithm (and implicitly, the worst case behavior) to establish provable performance guarantees and to address shortcomings in the classical measure of redundancy in the literature. Our first results are concern the hardness of approximating the smallest grammar problem. Most notably, we show that every efficient algorithm for the smallest grammar problem has approximation ratio at least 8569/8568 unless P=NP. We then bound approximation ratios for several of the best known grammar-based compression algorithms, including LZ78, B ISECTION, SEQUENTIAL, LONGEST MATCH, GREEDY, and RE-PAIR. Among these, the best upper bound we show is O(n/sup 1/2/). We finish by presenting two novel algorithms with exponentially better ratios of O(log/sup 3/n) and O(log(n/m/sup */)), where m/sup */ is the size of the smallest grammar for that input. The latter algorithm highlights a connection between grammar-based compression and LZ77.",
"title": ""
}
] |
[
{
"docid": "00eb132ce5063dd983c0c36724f82cec",
"text": "This paper analyzes customer product-choice behavior based on the recency and frequency of each customer’s page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression.",
"title": ""
},
{
"docid": "8182fe419366744a774ff637c8ace5dd",
"text": "The most useful environments for advancing research and development in video databases are those that provide complete video database management, including (1) video preprocessing for content representation and indexing, (2) storage management for video, metadata and indices, (3) image and semantic -based query processing, (4) realtime buffer management, and (5) continuous media streaming. Such environments support the entire process of investigating, implementing, analyzing and evaluating new techniques, thus identifying in a concrete way which techniques are truly practical and robust. In this paper we present a video database research initiative that culminated in the successful development of VDBMS, a video database research platform that supports comprehensive and efficient database management for digital video. We describe key video processing components of the system and illustrate the value of VDBMS as a research platform by describing several research projects carried out within the VDBMS environment. These include MPEG7 document support for video feature import and export, a new query operator for optimal multi-feature image similarity matching, secure access control for streaming video, and the mining of medical video data using hierarchical content organization.",
"title": ""
},
{
"docid": "e82918cb388666499767bbd4d59daf84",
"text": "The space around us is represented not once but many times in parietal cortex. These multiple representations encode locations and objects of interest in several egocentric reference frames. Stimulus representations are transformed from the coordinates of receptor surfaces, such as the retina or the cochlea, into the coordinates of effectors, such as the eye, head, or hand. The transformation is accomplished by dynamic updating of spatial representations in conjunction with voluntary movements. This direct sensory-to-motor coordinate transformation obviates the need for a single representation of space in environmental coordinates. In addition to representing object locations in motoric coordinates, parietal neurons exhibit strong modulation by attention. Both top-down and bottom-up mechanisms of attention contribute to the enhancement of visual responses. The saliance of a stimulus is the primary factor in determining the neural response to it. Although parietal neurons represent objects in motor coordinates, visual responses are independent of the intention to perform specific motor acts.",
"title": ""
},
{
"docid": "82a4bac1745e2d5dd9e39c5a4bf5b3e9",
"text": "Meaning can be as important as usability in the design of technology.",
"title": ""
},
{
"docid": "4552cfbd0aa36deeaa2e4a8c0b363f25",
"text": "This is a critical review of the literature on many-worlds interpretations (MWI), with arguments drawn partly from earlier critiques by Bell and Stein. The essential postulates involved in various MWI are extracted, and their consistency with the evident physical world is examined. Arguments are presented against MWI proposed by Everett, Graham and DeWitt. The relevance of frequency operators to MWI is examined; it is argued that frequency operator theorems of Hartle and Farhi-Goldstone-Gutmann do not in themselves provide a probability interpretation for quantum mechanics, and thus neither support existing MWI nor would be useful in constructing new MWI. Comments are made on papers by Geroch and Deutsch that advocate MWI. It is concluded that no plausible set of axioms exists for an MWI that describes",
"title": ""
},
{
"docid": "d355014cd6d5979307b6cdb49734db3e",
"text": "It is of great interest in exploiting texture information for classification of hyperspectral imagery (HSI) at high spatial resolution. In this paper, a classification paradigm to exploit rich texture information of HSI is proposed. The proposed framework employs local binary patterns (LBPs) to extract local image features, such as edges, corners, and spots. Two levels of fusion (i.e., feature-level fusion and decision-level fusion) are applied to the extracted LBP features along with global Gabor features and original spectral features, where feature-level fusion involves concatenation of multiple features before the pattern classification process while decision-level fusion performs on probability outputs of each individual classification pipeline and soft-decision fusion rule is adopted to merge results from the classifier ensemble. Moreover, the efficient extreme learning machine with a very simple structure is employed as the classifier. Experimental results on several HSI data sets demonstrate that the proposed framework is superior to some traditional alternatives.",
"title": ""
},
{
"docid": "41f87d266bce875ce5d603119d39c06c",
"text": "Tangible interaction shows promise to significantly enhance computer-mediated support for activities such as learning, problem solving, and design. However, tangible user interfaces are currently considered challenging to design and build. Designers and developers of these interfaces encounter several conceptual, methodological, and technical difficulties. Among others, these challenges include: the lack of appropriate interaction abstractions, the shortcomings of current user interface software tools to address continuous and parallel interactions, as well as the excessive effort required to integrate novel input and output technologies. To address these challenges, we propose a specification paradigm for designing and implementing Tangible User Interfaces (TUIs), that enables TUI developers to specify the structure and behavior of a tangible user interface using high-level constructs which abstract away implementation details. An important benefit of this approach, which is based on User Interface Description Language (UIDL) research, is that these specifications could be automatically or semi-automatically converted into concrete TUI implementations. In addition, such specifications could serve as a common ground for investigating both design and implementation concerns by TUI developers from different disciplines.\n Thus, the primary contribution of this article is a high-level UIDL that provides developers from different disciplines means for effectively specifying, discussing, and programming a broad range of tangible user interfaces. There are three distinct elements to this contribution: a visual specification technique that is based on Statecharts and Petri nets, an XML-compliant language that extends this visual specification technique, as well as a proof-of-concept prototype of a Tangible User Interface Management System (TUIMS) that semi-automatically translates high-level specifications into a program controlling specific target technologies.",
"title": ""
},
{
"docid": "d1a804b3ecd5ed5cf277ae0c01f85bde",
"text": "Researchers have extensively chronicled the trends and challenges in higher education (Altbach et al. 2009). MOOCs appear to be as much about the collective grasping of universities’ leaders to bring higher education into the digital age as they are about a particular method of teaching. In this chapter, I won’t spend time commenting on the role of MOOCs in educational transformation or even why attention to this mode of delivering education has received unprecedented hype (rarely has higher education as a system responded as rapidly to a trend as it has responded to open online courses). Instead, this chapter details different MOOC models and the underlying pedagogy of each.",
"title": ""
},
{
"docid": "bfea738332e9802e255881c5592195f2",
"text": "This paper presents a distributed Kalman filter to estimate the state of a sparsely connected, large-scale, n -dimensional, dynamical system monitored by a network of N sensors. Local Kalman filters are implemented on nl-dimensional subsystems, nl Lt n, obtained by spatially decomposing the large-scale system. The distributed Kalman filter is optimal under an Lth order Gauss-Markov approximation to the centralized filter. We quantify the information loss due to this Lth-order approximation by the divergence, which decreases as L increases. The order of the approximation L leads to a bound on the dimension of the subsystems, hence, providing a criterion for subsystem selection. The (approximated) centralized Riccati and Lyapunov equations are computed iteratively with only local communication and low-order computation by a distributed iterate collapse inversion (DICI) algorithm. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter. Nowhere in the network, storage, communication, or computation of n-dimensional vectors and matrices is required; only nl Lt n dimensional vectors and matrices are communicated or used in the local computations at the sensors. In other words, knowledge of the state is itself distributed.",
"title": ""
},
{
"docid": "5ddcfa43a488ee92dbf13f0a91310d5a",
"text": "We present in this chapter an overview of the Mumford and Shah model for image segmentation. We discuss its various formulations, some of its properties, the mathematical framework, and several approximations. We also present numerical algorithms and segmentation results using the Ambrosio–Tortorelli phase-field approximations on one hand, and using the level set formulations on the other hand. Several applications of the Mumford–Shah problem to image restoration are also presented. . Introduction: Description of theMumford and Shah Model An important problem in image analysis and computer vision is the segmentation one, that aims to partition a given image into its constituent objects, or to find boundaries of such objects. This chapter is devoted to the description, analysis, approximations, and applications of the classical Mumford and Shah functional proposed for image segmentation. In [–], David Mumford and Jayant Shah have formulated an energy minimization problem that allows to compute optimal piecewise-smooth or piecewise-constant approximations u of a given initial image g. Since then, their model has been analyzed and considered in depth by many authors, by studying properties of minimizers, approximations, and applications to image segmentation, image partition, image restoration, and more generally to image analysis and computer vision. We denote by Ω ⊂ Rd the image domain (an interval if d = , or a rectangle in the plane if d = ). More generally, we assume that Ω is open, bounded, and connected. Let g : Ω → R be a given gray-scale image (a signal in one dimension, a planar image in two dimensions, or a volumetric image in three dimensions). It is natural and without losing any generality to assume that g is a bounded function in Ω, g ∈ L(Ω). As formulated byMumford and Shah [], the segmentation problem in image analysis and computer vision consists in computing a decomposition Ω = Ω ∪Ω ∪ . . . ∪ Ωn ∪ K of the domain of the image g such that (a) The image g varies smoothly and/or slowly within each Ω i . (b) The image g varies discontinuously and/or rapidly across most of the boundary K between different Ω i . From the point of view of approximation theory, the segmentation problem may be restated as seeking ways to define and compute optimal approximations of a general function g(x) by piecewise-smooth functions u(x), i.e., functions u whose restrictions ui to the pieces Ω i of a decomposition of the domain Ω are continuous or differentiable. Mumford and ShahModel and its Applications to Image Segmentation and Image Restoration In what follows, Ω i will be disjoint connected open subsets of a domain Ω, each one with a piecewise-smooth boundary, and K will be a closed set, as the union of boundaries of Ω i inside Ω, thus Ω = Ω ∪Ω ∪ . . . ∪ Ωn ∪ K, K = Ω ∩ (∂Ω ∪ . . . ∪ ∂Ωn). The functional E to be minimized for image segmentation is defined by [–], E(u,K) = μ ∫ Ω (u − g)dx + ∫ Ω/K ∣∇u∣dx + ∣K∣, (.) where u : Ω → R is continuous or even differentiable inside each Ω i (or u ∈ H(Ω i)) and may be discontinuous across K. Here, ∣K∣ stands for the total surface measure of the hypersurface K (the counting measure if d = , the length measure if d = , the area measure if d = ). Later, we will define ∣K∣ byHd−(K), the d − dimensional Hausdorff measure in Rd . 
As explained by Mumford and Shah, dropping any of these three terms in (> .), inf E = : without the first, take u = , K = /; without the second, take u = g, K = /; without the third, take for example, in the discrete case K to be the boundary of all pixels of the image g, each Ω i be a pixel and u to be the average (value) of g over each pixel. The presence of all three terms leads to nontrivial solutions u, and an optimal pair (u,K) can be seen as a cartoon of the actual image g, providing a simplification of g. An important particular case is obtained when we restrict E to piecewise-constant functions u, i.e., u = constant ci on each open set Ω i . Multiplying E by μ−, we have μ−E(u,K) = ∑ i ∫ Ω i (g − ci)dx + ∣K∣, where = /μ. It is easy to verify that this is minimized in the variables ci by setting ci = meanΩ i (g) = ∫Ω i g(x)dx ∣Ω i ∣ , where ∣Ω i ∣ denotes here the Lebesgue measure of Ω i (e.g., area if d = , volume if d = ), so it is sufficient to minimize E(K) = ∑ i ∫ Ω i (g −meanΩ i g) dx + ∣K∣. It is possible to interpret E as the limit functional of E as μ → []. Finally, the Mumford and Shah model can also be seen as a deterministic refinement of Geman and Geman’s image restoration model []. . Background: The First Variation In order to better understand, analyze, and use the minimization problem (> .), it is useful to compute its first variation with respect to each of the unknowns. Mumford and Shah Model and its Applications to Image Segmentation and Image Restoration We first recall the definition of Sobolev functions u ∈ W ,(U) [], necessary to properly define a minimizer u when K is fixed. Definition LetU ⊂ Rd be an open set. We denote byW ,(U) (or by H(U)) the set of functions u ∈ L(Ω), whose first-order distributional partial derivatives belong to L(U). This means that there are functions u, . . . ,ud ∈ L(U) such that ∫ U u(x) ∂φ ∂xi (x)dx = − ∫ U ui(x)φ(x)dx for ≤ i ≤ d and for all functions φ ∈ C∞c (U). We may denote by ∂u ∂xi the distributional derivative ui of u and by∇u = ( ∂u ∂x , . . . , ∂u ∂xd ) its distributional gradient. In what follows, we denote by ∣∇u∣(x) the Euclidean norm of the gradient vector at x. H(U) = W ,(U) becomes a Banach space endowed with the norm ∥u∥W ,(U) = ∫ U udx + d ∑ i= ∫ U ( ∂u ∂xi ) dx] / . .. Minimizing in uwith K Fixed Let us assume first that K is fixed, as a closed subset of the open and bounded set Ω ⊂ Rd , and denote by E(u) = μ ∫ Ω/K (u − g)dx + ∫ Ω/K ∣∇u∣dx, for u ∈ W ,(Ω/K), where Ω/K is open and bounded, and g ∈ L(Ω/K). We have the following classical results obtained as a consequence of the standard method of calculus of variations. Proposition There is a unique minimizer of the problem inf u∈W ,(Ω/K) E(u). (.) Proof [] First, we note that ≤ inf E < +∞, since we can choose u ≡ and E(u) = μ ∫Ω/K g (x)dx < +∞. Thus, we can denote by m = inf u E(u) and let {uj} j≥ ∈ W ,(Ω/K) be a minimizing sequence such that lim j→∞ E(uj) = m. Recall that for u, v ∈ L,",
"title": ""
},
{
"docid": "6d594c21ff1632b780b510620484eb62",
"text": "The last several years have seen intensive interest in exploring neural-networkbased models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.",
"title": ""
},
{
"docid": "24cdd06953ffcab9fc028d3a345fcf21",
"text": "In this letter, a wideband microstrip-to-microstrip vertical transition using open-circuited slotline stepped-impedance resonator (SIR) is proposed. By controlling the impedance ratio of the slotline SIR, two transmission zeros (TZs) can be properly created and relocated to improve the passband selectivity. To determine the characteristic of this transition, its transmission-line circuit model is presented for analysis and design. Finally, a prototype transition is designed and fabricated for experimental verification. Not only high filtering selectivity via emergence of these TZs but also wideband filtering response with out-of-band suppression of about 22 dB in a frequency range up to $2.51f_{0}$ is experimentally achieved.",
"title": ""
},
{
"docid": "d68cd0d594f8db4a0decdbdf3656ece1",
"text": "In this paper we describe PRISM, a tool being developed at the University of Birmingham for the analysis of probabilistic systems. PRISM supports three probabilistic models: discrete-time Markov chains, continuous-time Markov chains and Markov decision processes. Analysis is performed through model checking such systems against specifications written in the probabilistic temporal logics PCTL and CSL. The tool features three model checking engines: one symbolic, using BDDs (binary decision diagrams) and MTBDDs (multi-terminal BDDs); one based on sparse matrices; and one which combines both symbolic and sparse matrix methods. PRISM has been successfully used to analyse probabilistic termination, performance, dependability and quality of service properties for a range of systems, including randomized distributed algorithms [2], polling systems [22], workstation clusters [18] and wireless cell communication [17].",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "59618dfab39640cd933e9eced4b203e4",
"text": "A recently widowed man constructed a guillotine in the entrance to his cellar, having previously announced his intention to decapitate himself. A neighbor who saw the device from her house alerted the police. The deceased was found completely decapitated, still holding a pair of pliers that he had used to activate the mechanism. The findings of the resulting investigation are described, and the mechanism of suicidal decapitation is reviewed.",
"title": ""
},
{
"docid": "7c666c07fffbd63e17470a74535d4c53",
"text": "A review of the diverse roles of entropy and the second law in computationalthermo– uid dynamics is presented. Entropy computations are related to numerical error, convergence criteria, time-step limitations, and other signi cant aspects of computational uid ow and heat transfer. The importance of the second law as a tool for estimating error bounds and the overall scheme’s robustness is described. As computational methods become more reliable and accurate, emerging applications involving the second law in the design of engineering thermal uid systems are described. Sample numerical results are presented and discussed for a multitude of applications in compressible ows, as well as problems with phase change heat transfer. Advantages and disadvantages of different entropy-based methods are discussed, as well as areas of importance suggested for future research.",
"title": ""
},
{
"docid": "3679fbedadd1541ba8c1f94ea9b3b85d",
"text": "Concrete is very sensitive to crack formation. As wide cracks endanger the durability, repair may be required. However, these repair works raise the life-cycle cost of concrete as they are labor intensive and because the structure becomes in disuse during repair. In 1994, C. Dry was the first who proposed the intentional introduction of self-healing properties in concrete. In the following years, several researchers started to investigate this topic. The goal of this review is to provide an in-depth comparison of the different self-healing approaches which are available today. Among these approaches, some are aimed at improving the natural mechanism of autogenous crack healing, while others are aimed at modifying concrete by embedding capsules with suitable healing agents so that cracks heal in a completely autonomous way after they appear. In this review, special attention is paid to the types of healing agents and capsules used. In addition, the various methodologies have been evaluated based on the trigger mechanism used and attention has been paid to the properties regained due to self-healing.",
"title": ""
},
{
"docid": "55b1eb2df97e5d8e871e341c80514ab1",
"text": "Modern digital still cameras sample the color spectrum using a color filter array coated to the CCD array such that each pixel samples only one color channel. The result is a mosaic of color samples which is used to reconstruct the full color image by taking the information of the pixels’ neighborhood. This process is called demosaicking. While standard literature evaluates the performance of these reconstruction algorithms by comparison of a ground-truth image with a reconstructed Bayer pattern image in terms of grayscale comparison, this work gives an evaluation concept to asses the geometrical accuracy of the resulting color images. Only if no geometrical distortions are created during the demosaicking process, it is allowed to use such images for metric calculations, e.g. 3D reconstruction or arbitrary metrical photogrammetric processing.",
"title": ""
},
{
"docid": "60600cf9e91c353c7c59fa5ac062b870",
"text": "The study of photovoltaic systems in an efficient manner requires a precise knowledge of the IV and PV characteristic curves of photovoltaic modules. A Simulation model for simulation of a single solar cell and two solar cells in series has been developed using Sim electronics (Mat lab /Simulink) environment and is presented here in this paper. A solar cell block is available in simelectronics, which was used with many other blocks to plot I-V and P-V characteristics under variations of parameters considering one parameter variation at a time. Effect of two environmental parameters of temperature and irradiance variations could also be observed from simulated characteristics.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
}
] |
scidocsrr
|
773a4eaaf8a381f8d6511cb5e81af6ab
|
REAL-TIME FULLY AUTOMATED RESTAURANT MANAGEMENT AND COMMUNICATION SYSTEM “ RESTO ”
|
[
{
"docid": "897efb599e554bf453a7b787c5874d48",
"text": "The Rampant growth of wireless technology and Mobile devices in this era is creating a great impact on our lives. Some early efforts have been made to combine and utilize both of these technologies in advancement of hospitality industry. This research work aims to automate the food ordering process in restaurant and also improve the dining experience of customers. In this paper we discuss about the design & implementation of automated food ordering system with real time customer feedback (AOS-RTF) for restaurants. This system, implements wireless data access to servers. The android application on user’s mobile will have all the menu details. The order details from customer’s mobile are wirelessly updated in central database and subsequently sent to kitchen and cashier respectively. The restaurant owner can manage the menu modifications easily. The wireless application on mobile devices provide a means of convenience, improving efficiency and accuracy for restaurants by saving time, reducing human errors and real-time customer feedback. This system successfully over comes the drawbacks in earlier PDA based food ordering system and is less expensive and more effective than the multi-touchable restaurant management systems.",
"title": ""
}
] |
[
{
"docid": "fa91331ef31de20ae63cc6c8ab33f062",
"text": "Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.",
"title": ""
},
{
"docid": "5cccc7cc748d3461dc3c0fb42a09245f",
"text": "The self and attachment difficulties associated with chronic childhood abuse and other forms of pervasive trauma must be understood and addressed in the context of the therapeutic relationship for healing to extend beyond resolution of traditional psychiatric symptoms and skill deficits. The authors integrate contemporary research and theory about attachment and complex developmental trauma, including dissociation, and apply it to psychotherapy of complex trauma, especially as this research and theory inform the therapeutic relationship. Relevant literature on complex trauma and attachment is integrated with contemporary trauma theory as the background for discussing relational issues that commonly arise in this treatment, highlighting common challenges such as forming a therapeutic alliance, managing frame and boundaries, and working with dissociation and reenactments.",
"title": ""
},
{
"docid": "7af9293fbe12f3e859ee579d0f8739a5",
"text": "We present the findings from a Dutch field study of 30 outsourcing deals totaling to more than 100 million Euro, where both customers and corresponding IT-outsourcing providers participated. The main objective of the study was to examine from a number of well-known factors whether they discriminate between IT-outsourcing success and failure in the early phase of service delivery and to determine their impact on the chance on a successful deal. We investigated controllable factors to increase the odds during sourcing and rigid factors as a warning sign before closing a deal. Based on 250 interviews we collected 28 thousand data points. From the data and the perceived failure or success of the closed deals we investigated the discriminative power of the determinants (ex post). We found three statistically significant controllable factors that discriminated in an early phase between failure and success. They are: working according to the transition plan, demand management and, to our surprise, communication within the supplier organisation (so not between client and supplier). These factors also turned out to be the only significant factors for a (logistic) model predicting the chance of a successful IT-outsourcing. Improving demand management and internal communication at the supplier increases the odds the most. Sticking to the transition plan only modestly. Other controllable factors were not significant in our study. They are managing the business case, transfer of staff or assets, retention of expertise and communication within the client organisation. Of the rigid factors, the motive to outsource, cultural differences, and the type of work were insignificant. The motive of the supplier was significant: internal motivations like increasing profit margins or business volume decreased the chance of success while external motivations like increasing market share or becoming a player increased the success rate. From the data we inferred that the degree of experience with sourcing did not show to be a convincing factor of success. Hiring sourcing consultants worked contra-productive: it lowered chances of success.",
"title": ""
},
{
"docid": "ac56eb533e3ae40b8300d4269fd2c08f",
"text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.",
"title": ""
},
{
"docid": "e66ce20b22d183d5b1d9aec2cdc1f736",
"text": "Performance tests were carried out for a microchannel printed circuit heat exchanger (PCHE), which was fabricated with micro photo-etching and diffusion bonding technologies. The microchannel PCHE was tested for Reynolds numbers in the range of 100‒850 varying the hot-side inlet temperature between 40 °C–50 °C while keeping the cold-side temperature fixed at 20 °C. It was found that the average heat transfer rate and heat transfer performance of the countercurrrent configuration were 6.8% and 10%‒15% higher, respectively, than those of the parallel flow. The average heat transfer rate, heat transfer performance and pressure drop increased with increasing Reynolds number in all experiments. Increasing inlet temperature did not affect the heat transfer performance while it slightly decreased the pressure drop in the experimental range considered. Empirical correlations have been developed for the heat transfer coefficient and pressure drop factor as functions of the Reynolds number.",
"title": ""
},
{
"docid": "269e1c0d737beafd10560360049c6ee3",
"text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.",
"title": ""
},
{
"docid": "f84c399ff746a8721640e115fd20745e",
"text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.",
"title": ""
},
{
"docid": "ddc3241c09a33bde1346623cf74e6866",
"text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.",
"title": ""
},
{
"docid": "9157266c7dea945bf5a68f058836e681",
"text": "For the task of implicit discourse relation recognition, traditional models utilizing manual features can suffer from data sparsity problem. Neural models provide a solution with distributed representations, which could encode the latent semantic information, and are suitable for recognizing semantic relations between argument pairs. However, conventional vector representations usually adopt embeddings at the word level and cannot well handle the rare word problem without carefully considering morphological information at character level. Moreover, embeddings are assigned to individual words independently, which lacks of the crucial contextual information. This paper proposes a neural model utilizing context-aware character-enhanced embeddings to alleviate the drawbacks of the current word level representation. Our experiments show that the enhanced embeddings work well and the proposed model obtains state-of-the-art results.",
"title": ""
},
{
"docid": "27bcbde431c340db7544b58faa597fb7",
"text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.",
"title": ""
},
{
"docid": "1568a9bb47ca0ef28bccf6fdeaad87b7",
"text": "Many Android apps use SSL/TLS to transmit sensitive information securely. However, developers often provide their own implementation of the standard SSL/TLS certificate validation process. Unfortunately, many such custom implementations have subtle bugs, have built-in exceptions for self-signed certificates, or blindly assert all certificates are valid, leaving many Android apps vulnerable to SSL/TLS Man-in-the-Middle attacks. In this paper, we present SMV-HUNTER, a system for the automatic, large-scale identification of such vulnerabilities that combines both static and dynamic analysis. The static component detects when a custom validation procedure has been given, thereby identifying potentially vulnerable apps, and extracts information used to guide the dynamic analysis, which then uses user interface enumeration and automation techniques to trigger the potentially vulnerable code under an active Man-in-the-Middle attack. We have implemented SMV-HUNTER and evaluated it on 23,418 apps downloaded from the Google Play market, of which 1,453 apps were identified as being potentially vulnerable by static analysis, with an average overhead of approximately 4 seconds per app, running on 16 threads in parallel. Among these potentially vulnerable apps, 726 were confirmed vulnerable using our dynamic analysis, with an average overhead of about 44 seconds per app, running on 8 emulators in parallel.",
"title": ""
},
{
"docid": "a2a85b11d4bd6cc6cc709ae1efd11322",
"text": "This paper presents a new adoption framework i.e. Individual, Technology, Organization and Environment (I-TOE) to address the factors influencing computer-assisted auditing tools (CAATs) acceptance in public audit firms. CAATs are audit technology that helps in achieving effective and efficient audit work. While CAATs adoption varies among audit departments, prior studies focused narrowly on CAATs acceptance issues from the individual perspective and no comprehensive study has been done that focused on both organization and individual standpoints. Realizing this gap, this paper aims to predict CAATs adoption factors using the I-TOE framework. I-TOE stresses on the relationship of Individuals factors (i.e. performance expectancy, effort expectancy, social influence, facilitating condition, hedonic motivation and habit), CAATs Technology (i.e. technology cost-benefit, risk and technology fit), Organization characteristics (i.e. size, readiness and top management), and Environment factors (i.e. client’s AIS complexity, competitive pressure and professional accounting body regulations) towards CAATs acceptance. It integrates both Unified Theory of Acceptance and Use of Technology 2 and Technology-Organization-Environment framework. I-TOE provides a comprehensive model that helps audit firms and regulatory bodies to develop strategies and policies to increase CAATs adoption. Empirical study through questionnaire survey will be conducted to validate I-TOE model.",
"title": ""
},
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
},
{
"docid": "6be3f84e371874e2df32de9cb1d92482",
"text": "We present an accurate and efficient stereo matching method using locally shared labels, a new labeling scheme that enables spatial propagation in MRF inference using graph cuts. They give each pixel and region a set of candidate disparity labels, which are randomly initialized, spatially propagated, and refined for continuous disparity estimation. We cast the selection and propagation of locally-defined disparity labels as fusion-based energy minimization. The joint use of graph cuts and locally shared labels has advantages over previous approaches based on fusion moves or belief propagation, it produces submodular moves deriving a subproblem optimality, enables powerful randomized search, helps to find good smooth, locally planar disparity maps, which are reasonable for natural scenes, allows parallel computation of both unary and pairwise costs. Our method is evaluated using the Middlebury stereo benchmark and achieves first place in sub-pixel accuracy.",
"title": ""
},
{
"docid": "e380710014dd33734636f077a59f1b62",
"text": "Since the work of Golgi and Cajal, light microscopy has remained a key tool for neuroscientists to observe cellular properties. Ongoing advances have enabled new experimental capabilities using light to inspect the nervous system across multiple spatial scales, including ultrastructural scales finer than the optical diffraction limit. Other progress permits functional imaging at faster speeds, at greater depths in brain tissue, and over larger tissue volumes than previously possible. Portable, miniaturized fluorescence microscopes now allow brain imaging in freely behaving mice. Complementary progress on animal preparations has enabled imaging in head-restrained behaving animals, as well as time-lapse microscopy studies in the brains of live subjects. Mouse genetic approaches permit mosaic and inducible fluorescence-labeling strategies, whereas intrinsic contrast mechanisms allow in vivo imaging of animals and humans without use of exogenous markers. This review surveys such advances and highlights emerging capabilities of particular interest to neuroscientists.",
"title": ""
},
{
"docid": "6abe1b7806f6452bbcc087b458a7ef96",
"text": "We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.",
"title": ""
},
{
"docid": "b25b7100c035ad2953fb43087ede1625",
"text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.",
"title": ""
},
{
"docid": "5097aae222f76023cf1d6dbe7765e504",
"text": "In this article, we introduce a prototype of an innovative technology for proving the origins of captured digital media. In an era of fake news, when someone shows us a video or picture of some event, how can we trust its authenticity? It seems that the public no longer believe that traditional media is a reliable reference of fact, perhaps due, in part, to the onset of many diverse sources of conflicting information, via social media. Indeed, the issue of \"fake\" reached a crescendo during the 2016 U.S. Presidential Election, when the winner, Donald Trump, claimed that The New York Times was trying to discredit him by pushing disinformation. Current research into overcoming the problem of fake news does not focus on establishing the ownership of media resources used in such stories-the blockchain-based application introduced in this article is technology that is capable of indicating the authenticity of digital media. Put simply, using the trust mechanisms of blockchain technology, the tool can show, beyond doubt, the provenance of any source of digital media, including images used out of context in attempts to mislead. Although the application is an early prototype and its capability to find fake resources is somewhat limited, we outline future improvements that would overcome such limitations. Furthermore, we believe that our application (and its use of blockchain technology and standardized metadata) introduces a novel approach to overcoming falsities in news reporting and the provenance of media resources used therein. However, while our application has the potential to be able to verify the originality of media resources, we believe that technology is only capable of providing a partial solution to fake news. That is because it is incapable of proving the authenticity of a news story as a whole. We believe that takes human skills.",
"title": ""
},
{
"docid": "87068ab038d08f9e1e386bc69ee8a5b2",
"text": "The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.",
"title": ""
},
{
"docid": "6a3210307c98b4311271c29da142b134",
"text": "Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research.",
"title": ""
}
] |
scidocsrr
|
6ab14604773495f791954ce90412f07b
|
Underwater SLAM: Challenges, state of the art, algorithms and a new biologically-inspired approach
|
[
{
"docid": "692207fdd7e27a04924000648f8b1bbf",
"text": "Many animals, on air, water, or land, navigate in three-dimensional (3D) environments, yet it remains unclear how brain circuits encode the animal's 3D position. We recorded single neurons in freely flying bats, using a wireless neural-telemetry system, and studied how hippocampal place cells encode 3D volumetric space during flight. Individual place cells were active in confined 3D volumes, and in >90% of the neurons, all three axes were encoded with similar resolution. The 3D place fields from different neurons spanned different locations and collectively represented uniformly the available space in the room. Theta rhythmicity was absent in the firing patterns of 3D place cells. These results suggest that the bat hippocampus represents 3D volumetric space by a uniform and nearly isotropic rate code.",
"title": ""
}
] |
[
{
"docid": "2292c60d69c94f31c2831c2f21c327d8",
"text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis.",
"title": ""
},
{
"docid": "f7daa0d175d4a7ae8b0869802ff3c4ab",
"text": "Several consumer speech devices feature voice interfaces that perform on-device keyword spotting to initiate user interactions. Accurate on-device keyword spotting within a tight CPU budget is crucial for such devices. Motivated by this, we investigated two ways to improve deep neural network (DNN) acoustic models for keyword spotting without increasing CPU usage. First, we used low-rank weight matrices throughout the DNN. This allowed us to increase representational power by increasing the number of hidden nodes per layer without changing the total number of multiplications. Second, we used knowledge distilled from an ensemble of much larger DNNs used only during training. We systematically evaluated these two approaches on a massive corpus of far-field utterances. Alone both techniques improve performance and together they combine to give significant reductions in false alarms and misses without increasing CPU or memory usage.",
"title": ""
},
{
"docid": "b5f2717f3398a94ebeac2465dff98098",
"text": "Blockchains primarily enable credible accounting of digital events, e.g., money transfers in cryptocurrencies. However, beyond this original purpose, blockchains also irrevocably record arbitrary data, ranging from short messages to pictures. This does not come without risk for users as each participant has to locally replicate the complete blockchain, particularly including potentially harmful content. We provide the first systematic analysis of the benefits and threats of arbitrary blockchain content. Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal. Based on these insights, we conduct a thorough quantitative and qualitative analysis of unintended content on Bitcoin’s blockchain. Although most data originates from benign extensions to Bitcoin’s protocol, our analysis reveals more than 1600 files on the blockchain, over 99% of which are texts or images. Among these files there is clearly objectionable content such as links to child pornography, which is distributed to all Bitcoin participants. With our analysis, we thus highlight the importance for future blockchain designs to address the possibility of unintended data insertion and protect blockchain users accordingly.",
"title": ""
},
{
"docid": "2c6fd73e6ec0ebc0ae257676c712d024",
"text": "This paper addresses the problem of spatiotemporal localization of actions in videos. Compared to leading approaches, which all learn to localize based on carefully annotated boxes on training video frames, we adhere to a weakly-supervised solution that only requires a video class label. We introduce an actor-supervised architecture that exploits the inherent compositionality of actions in terms of actor transformations, to localize actions. We make two contributions. First, we propose actor proposals derived from a detector for human and non-human actors intended for images, which is linked over time by Siamese similarity matching to account for actor deformations. Second, we propose an actor-based attention mechanism that enables the localization of the actions from action class labels and actor proposals and is end-to-end trainable. Experiments on three human and non-human action datasets show actor supervision is state-of-the-art for weakly-supervised action localization and is even competitive to some fullysupervised alternatives.",
"title": ""
},
{
"docid": "87a256b5e67b97cf4a11b5664a150295",
"text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.",
"title": ""
},
{
"docid": "f544c879f4f496a752c3c3434469bf90",
"text": "Peter Eden Information Security Research group School of Computing and Mathematics Department of Computing, Engineering and Science University of South Wales Pontypridd, CF371DL UK [email protected] Andrew Blyth Information Security Research group School of Computing and Mathematics Department of Computing, Engineering and Science University of South Wales Pontypridd, CF371DL UK [email protected]",
"title": ""
},
{
"docid": "ad7b715f434f3a500be8d52a047b9be1",
"text": "This paper presents a quantitative analysis of data collected by an online testing system for SQL \"select\" queries. The data was collected from almost one thousand students, over eight years. We examine which types of queries our students found harder to write. The seven types of SQL queries studied are: simple queries on one table; grouping, both with and without \"having\"; natural joins; simple and correlated sub-queries; and self-joins. The order of queries in the preceding sentence reflects the order of student difficulty we see in our data.",
"title": ""
},
{
"docid": "f16676f00cd50173d75bd61936ec200c",
"text": "Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) Its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.",
"title": ""
},
{
"docid": "7eb7cfc2ca574b0965008117cf7070d9",
"text": "We present a framework, Atlas, which incorporates application-awareness into Software-Defined Networking (SDN), which is currently capable of L2/3/4-based policy enforcement but agnostic to higher layers. Atlas enables fine-grained, accurate and scalable application classification in SDN. It employs a machine learning (ML) based traffic classification technique, a crowd-sourcing approach to obtain ground truth data and leverages SDN's data reporting mechanism and centralized control. We prototype Atlas on HP Labs wireless networks and observe 94% accuracy on average, for top 40 Android applications.",
"title": ""
},
{
"docid": "90ce5197708ee86f42ac8c5e985e481f",
"text": "This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method.",
"title": ""
},
{
"docid": "2ed35ae53d1d5b6a85a9ea234ecf24ec",
"text": "Low back pain is a significant public health problem and one of the most commonly reported reasons for the use of Complementary Alternative Medicine. A randomized control trial was conducted in subjects with non-specific chronic low back pain comparing Iyengar yoga therapy to an educational control group. Both programs were 16 weeks long. Subjects were primarily self-referred and screened by primary care physicians for study of inclusion/exclusion criteria. The primary outcome for the study was functional disability. Secondary outcomes including present pain intensity, pain medication usage, pain-related attitudes and behaviors, and spinal range of motion were measured before and after the interventions. Subjects had low back pain for 11.2+/-1.54 years and 48% used pain medication. Overall, subjects presented with less pain and lower functional disability than subjects in other published intervention studies for chronic low back pain. Of the 60 subjects enrolled, 42 (70%) completed the study. Multivariate analyses of outcomes in the categories of medical, functional, psychological and behavioral factors indicated that significant differences between groups existed in functional and medical outcomes but not for the psychological or behavioral outcomes. Univariate analyses of medical and functional outcomes revealed significant reductions in pain intensity (64%), functional disability (77%) and pain medication usage (88%) in the yoga group at the post and 3-month follow-up assessments. These preliminary data indicate that the majority of self-referred persons with mild chronic low back pain will comply to and report improvement on medical and functional pain-related outcomes from Iyengar yoga therapy.",
"title": ""
},
{
"docid": "850f29a1d3c5bc96bb36787aba428331",
"text": "In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON). Our method is dedicated to automatically selecting relevant image regions from weak annotations, e.g. global image labels, and encompasses the following contributions. Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection. Secondly, the deep CNN is trained to optimize Average Precision, and fine-tuned on the target dataset with efficient computations due to convolutional feature sharing. A thorough experimental validation shows that WELDON outperforms state-of-the-art results on six different datasets.",
"title": ""
},
{
"docid": "9444d244964ba6cc679a298efbf39cc9",
"text": "STREAMSCOPE (or STREAMS) is a reliable distributed stream computation engine that has been deployed in shared 20,000-server production clusters at Microsoft. STREAMS provides a continuous temporal stream model that allows users to express complex stream processing logic naturally and declaratively. STREAMS supports business-critical streaming applications that can process tens of billions (or tens of terabytes) of input events per day continuously with complex logic involving tens of temporal joins, aggregations, and sophisticated userdefined functions, while maintaining tens of terabytes inmemory computation states on thousands of machines. STREAMS introduces two abstractions, rVertex and rStream, to manage the complexity in distributed stream computation systems. The abstractions allow efficient and flexible distributed execution and failure recovery, make it easy to reason about correctness even with failures, and facilitate the development, debugging, and deployment of complex multi-stage streaming applications.",
"title": ""
},
{
"docid": "c340cbb5f6b062caeed570dc2329e482",
"text": "We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the \"address-event representation\" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses characteristics, in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.",
"title": ""
},
{
"docid": "3500278940baaf6f510ad47463cbf5ed",
"text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.",
"title": ""
},
{
"docid": "cb6354591bbcf130beea46701ae0e59f",
"text": "Requirements engineering process is a human endeavor. People who hold a stake in a project are involved in the requirements engineering process. They are from different backgrounds and with different organizational and individual goals, social positions, and personalities. They have different ways to understand and express the knowledge, and communicate with others. The requirements development processes, therefore, vary widely depending on the people involved. In order to acquire quality requirements from different people, a large number of methods exit. However, because of the inadequate understanding about methods and the variability of the situations in which requirements are developed, it is difficult for organizations to identify a set of appropriate methods to develop requirements in a structured and systematic way. The insufficient requirements engineering process forms one important factor that cause the failure of an IT project [29].",
"title": ""
},
{
"docid": "1406e692dc31cd4f89ea9a5441b84691",
"text": "2004 Recent advancements in Field Programmable Gate Array (FPGA) technology have resulted in FPGA devices that support the implementation of a complete computer system on a single FPGA chip. A soft-core processor is a central component of such a system. A soft-core processor is a microprocessor defined in software, which can be synthesized in programmable hardware, such as FPGAs. The Nios soft-core processor from Altera Corporation is studied and a Verilog implementation of the Nios soft-core processor has been developed, called UT Nios. The UT Nios is described, its performance dependence on various architectural parameters is investigated and then compared to the original implementation from Altera. Experiments show that the performance of different types of applications varies significantly depending on the architectural parameters. The performance comparison shows that UT Nios achieves performance comparable to the original implementation. Finally, the design methodology, experiences from the design process and issues encountered are discussed. iii Acknowledgments",
"title": ""
},
{
"docid": "4b2e6f5a0ce30428377df72d8350d637",
"text": "Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. For these tasks, understanding logical and semantic relationship between two sentences is required but it is yet challenging. Although attention mechanism is useful to capture the semantic relationship and to properly align the elements of two sentences, previous methods of attention mechanism simply use a summation operation which does not retain original features enough. Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. It enables preserving the original and the co-attentive feature information from the bottommost word embedding layer to the uppermost recurrent layer. To alleviate the problem of an ever-increasing size of feature vectors due to dense concatenation operations, we also propose to use an autoencoder after dense concatenation. We evaluate our proposed architecture on highly competitive benchmark datasets related to sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performances for most of the tasks.",
"title": ""
},
{
"docid": "9a925106f3cdf95ec08b7bf53cbb526f",
"text": "High-utility itemset mining (HUIM) is an emerging area of data mining and is widely used. HUIM differs from the frequent itemset mining (FIM), as the latter considers only the frequency factor, whereas the former has been designed to address both quantity and profit factors to reveal the most profitable products. The challenges of generating the HUI include exponential complexity in both time and space. Moreover, the pruning techniques of reducing the search space, which is available in FIM because of their monotonic and anti-monotonic properties, cannot be used in HUIM. In this paper, we propose a novel selective database projection-based HUI mining algorithm (SPHUI-Miner). We introduce an efficient data format, named HUI-RTPL, which is an optimum and compact representation of data requiring low memory. We also propose two novel data structures, viz, selective database projection utility list and Tail-Count list to prune the search space for HUI mining. Selective projections of the database reduce the scanning time of the database making our proposed approach more efficient. It creates unique data instances and new projections for data having less dimensions thereby resulting in faster HUI mining. We also prove upper bounds on the amount of memory consumed by these projections. Experimental comparisons on various benchmark data sets show that the SPHUI-Miner algorithm outperforms the state-of-the-art algorithms in terms of computation time, memory usage, scalability, and candidates generation.",
"title": ""
},
{
"docid": "bb74cbb76c6efb4a030d2c5653e18842",
"text": "Two new wideband in-phase and out-of-phase balanced power dividing/combining networks are proposed in this paper. Based on matrix transformation, the differential-mode and common-mode equivalent circuits of the two wideband in-phase and out-of-phase networks can be easily deduced. A patterned ground-plane technique is used to realize the strong coupling of the shorted coupled lines for the differential mode. Two planar wideband in-phase and out-of-phase balanced networks with bandwidths of 55.3% and 64.4% for the differential mode with wideband common-mode suppression are designed and fabricated. The theoretical and measured results agree well with each other and show good in-band performances.",
"title": ""
}
] |
scidocsrr
|
71ac4d485ada534942f77b00134a18c2
|
Fast Cartoon + Texture Image Filters
|
[
{
"docid": "4a779f5e15cc60f131a77c69e09e54bc",
"text": "We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.",
"title": ""
}
] |
[
{
"docid": "96635746b06bf210ad5503a3459ff717",
"text": "OBJECTIVE\nPatients with complex regional pain syndrome (CRPS) suffer from continuous regional limb pain and from hyperesthesia to touch and pain. To better understand the pathophysiological mechanisms underlying the hyperesthesia of CRPS patients, we investigated their cortical processing of touch and acute pain.\n\n\nMETHODS\nCortical responses to tactile stimuli applied to the thumbs, index and little fingers (D1, D2, and D5) and nociceptive stimuli delivered to dorsa of the hands were recorded with a whole-scalp neuromagnetometer from eight chronic CRPS patients and from nine healthy control subjects.\n\n\nRESULTS\nIn the patients, primary somatosensory (SI) cortex activation to tactile stimulation of D2 was significantly stronger, and the D1-D5 distance in SI was significantly smaller for the painful hand compared to the healthy hand. The PPC activation to tactile stimulation was significantly weaker in the patients than in the control subjects. To nociceptive stimulation with equal laser energy, the secondary somatosensory (SII) cortices and posterior parietal cortex (PPC) were similarly activated in both groups. The PPC source strength correlated with the pain rating in the control subjects, but not in the patients.\n\n\nCONCLUSIONS\nThe enhanced SI activation in hyperesthetic CRPS patients may reflect central sensitization to touch. The decreased D1-D5 distance implies permanent changes in SI hand representations in chronic CRPS. The defective PPC activation could be associated with the neglect-like symptoms of the patients. As the SII and PPC responses were not enhanced in the CRPS patients, other brain areas are likely to contribute to the observed hyperesthesia to pain.\n\n\nSIGNIFICANCE\nOur results indicate changes of somatosensory processing at cortical level in CRPS.",
"title": ""
},
{
"docid": "fea3c6f49169e0af01e31b46d8c72a9b",
"text": "Psoriatic arthritis (PsA) is an archetypal type of spondyloarthritis, but may have some features of rheumatoid arthritis, namely a small joint polyarthritis pattern. Most of these features are well demonstrated on imaging, and as a result, imaging has helped us to better understand the pathophysiology of PsA. Although the unique changes of PsA such as the \"pencil-in-cup\" deformities and periostitis are commonly shown on conventional radiography, PsA affects all areas of joints, with enthesitis being the predominant pathology. Imaging, especially magnetic resonance imaging (MRI) and ultrasonography, has allowed us to explain the relationships between enthesitis, synovitis (or the synovio-entheseal complex) and osteitis or bone oedema in PsA. Histological studies have complemented the imaging findings, and have corroborated the MRI changes seen in the skin and nails in PsA. The advancement in imaging technology such as high-resolution \"microscopy\" MRI and whole-body MRI, and improved protocols such as ultrashort echo time, will further enhance our understanding of the disease mechanisms. The ability to demonstrate very early pre-clinical changes as shown by ultrasonography and bone scintigraphy may eventually provide a basis for screening for disease and will further improve the understanding of the link between skin and joint disease.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "cd549297cb4644aaf24c28b5bbdadb24",
"text": "This study identifies the difference in the perceptions of academic stress and reaction to stressors based on gender among first year university students in Nigeria. Student Academic Stress Scale (SASS) was the instrument used to collect data from 2,520 first year university students chosen through systematic random sampling from Universities in the six geo-political zones of Nigeria. To determine gender differences among the respondents, independent samples t-test was used via SPSS version 15.0. The results of research showed that male and female respondents differed significantly in their perceptions of frustrations, financials, conflicts and selfexpectations stressors but did not significantly differ in their perceptions of pressures and changesrelated stressors. Generally, no significant difference was found between male and female respondents in their perceptions of academic stressors, however using the mean scores as basis, female respondents scored higher compared to male respondents. Regarding reaction to stressors, male and female respondents differ significantly in their perceptions of emotional and cognitive reactions but did not differ significantly in their perceptions of physiological and behavioural reaction to stressors.",
"title": ""
},
{
"docid": "d7793313ab21020e79e41817b8372ee8",
"text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"title": ""
},
{
"docid": "2e9365598408553ef0ac3310a5435213",
"text": "Applications of aerial robots are progressively expanding into complex urban and natural environments. Despite remarkable advancements in the field, robotic rotorcraft is still drastically limited by the environment in which they operate. Obstacle detection and avoidance systems have functionality limitations and substantially add to the computational complexity of the onboard equipment of flying vehicles. Furthermore, they often cannot identify difficult-to-detect obstacles such as windows and wires. Robustness to physical contact with the environment is essential to mitigate these limitations and continue mission completion. However, many current mechanical impact protection concepts are either not sufficiently effective or too heavy and cumbersome, severely limiting the flight time and the capability of flying in constrained and narrow spaces. Therefore, novel impact protection systems are needed to enable flying robots to navigate in confined or heavily cluttered environments easily, safely, and efficiently while minimizing the performance penalty caused by the protection method. Here, we report the development of a protection system for robotic rotorcraft consisting of a free-to-spin circular protector that is able to decouple impact yawing moments from the vehicle, combined with a cyclic origami impact cushion capable of reducing the peak impact force experienced by the vehicle. Experimental results using a sensor-equipped miniature quadrotor demonstrated the impact resilience effectiveness of the Rotary Origami Protective System (Rotorigami) for a variety of collision scenarios. We anticipate this work to be a starting point for the exploitation of origami structures in the passive or active impact protection of robotic vehicles.",
"title": ""
},
{
"docid": "5768212e1fa93a7321fa6c0deff10c88",
"text": "Human research biobanks have rapidly expanded in the past 20 years, in terms of both their complexity and utility. To date there exists no agreement upon classification schema for these biobanks. This is an important issue to address for several reasons: to ensure that the diversity of biobanks is appreciated, to assist researchers in understanding what type of biobank they need access to, and to help institutions/funding bodies appreciate the varying level of support required for different types of biobanks. To capture the degree of complexity, specialization, and diversity that exists among human research biobanks, we propose here a new classification schema achieved using a conceptual classification approach. This schema is based on 4 functional biobank \"elements\" (donor/participant, design, biospecimens, and brand), which we feel are most important to the major stakeholder groups (public/participants, members of the biobank community, health care professionals/researcher users, sponsors/funders, and oversight bodies), and multiple intrinsic features or \"subelements\" (eg, the element \"biospecimens\" could be further classified based on preservation method into fixed, frozen, fresh, live, and desiccated). We further propose that the subelements relating to design (scale, accrual, data format, and data content) and brand (user, leadership, and sponsor) should be specifically recognized by individual biobanks and included in their communications to the broad stakeholder audience.",
"title": ""
},
{
"docid": "8f99bf256228119ea220e2e22c19cd6f",
"text": "A Wi-Fi wireless platform with embedded Linux web server and its integration into a network of sensor nodes for building automation and industrial automation is implemented here. In this system focus is on developing an ESP8266 based Low cost Wi-Fi based wireless sensor network, the IEEE 802.11n protocol is used for system. In most of the existing wireless sensor network are designed based on ZigBee and RF. The pecking order of the system is such that the lowest level is that of the sensors, the in-between level is the controllers, and the highest level is a supervisory node. The supervisor can be react as an active or passive. The system is shown to permit all achievable controller failure scenarios. The supervisor can handle the entire control load of all controllers, should the need arise. An integrated system platform which can provide Linux web server, database, and PHP run-time environment was built by using ARM Linux development board with Apache+PHP+SQLite3. Various Internet accesses were offered by using Wi-Fi wireless networks communication technology. Raspberry Pi use as a main server in the system and which connects the sensor nodes via Wi-Fi in the wireless sensor network and collects sensors data from different sensors, and supply multi-clients services including data display through an Embedded Linux based Web-Server.",
"title": ""
},
{
"docid": "6befac01d5a3f21100a54de43ee62845",
"text": "Robots used for tasks in space have strict requirements. Modular reconfigurable robots have a variety of attributes that are advantageous for these conditions including the ability to serve as many tools at once saving weight, packing into compressed forms saving space and having large redundancy to increase robustness. Self-reconfigurable systems can also self-repair as well as automatically adapt to changing conditions or ones that were not anticipated. PolyBot may serve well in the space manipulation and surface mobility class of space applications.",
"title": ""
},
{
"docid": "5e59c29a3861b0b2387ed4c291661f1f",
"text": "Predicting meme burst is of great relevance to develop security-related detecting and early warning capabilities. In this paper, we propose a feature-based method for real-time meme burst predictions, namely “Semantic, Network, and Time” (SNAT). By considering the potential characteristics of bursty memes, such as the semantics and spatio-temporal characteristics during their propagation, SNAT is capable of capturing meme burst at the very beginning and in real time. Experimental results prove the effectiveness of SNAT in terms of both fixed-time and real-time meme burst prediction tasks.",
"title": ""
},
{
"docid": "2321a11afd8a9f4da42a092ea43b544b",
"text": "This paper proposes a method for recognizing postures and gestures using foot pressure sensors, and we investigate optimal positions for pressure sensors on soles are the best for motion recognition. In experiments, the recognition accuracies of 22 kinds of daily postures and gestures were evaluated from foot-pressure sensor values. Furthermore, the optimum measurement points for high recognition accuracy were examined by evaluating combinations of two foot pressure measurement areas on a round-robin basis. As a result, when selecting the optimum two points for a user, the recognition accuracy was about 93.6% on average. Although individual differences were seen, the best combinations of areas for each subject were largely divided into two major patterns. When two points were chosen, combinations of the near thenar, which is located near the thumb ball, and near the heel or point of the outside of the middle of the foot were highly recognized. Of the best two points, one was commonly the near thenar for subjects. By taking three points of data and covering these two combinations, it will be possible to cope with individual differences. The recognition accuracy of the averaged combinations of the best two combinations for all subjects was classified with an accuracy of about 91.0% on average. On the basis of these results, two types of pressure sensing shoes were developed.",
"title": ""
},
{
"docid": "48a18e689b226936813f8dcfd2664819",
"text": "This report explores integrating fuzzy logic with two data mining methods (association rules and frequency episodes) for intrusion detection. Data mining methods are capable of extracting patterns automatically from a large amount of data. The integration with fuzzy logic can produce more abstract and flexible patterns for intrusion detection, since many quantitative features are involved in intrusion detection and security itself is fuzzy. In this report, Chapter I introduces the concept of intrusion detection and the practicality of applying fuzzy logic to intrusion detection. In Chapter II, two types of intrusion detection systems, host-based systems and network-based systems, are briefly reviewed. Some important artificial intelligence techniques that have been applied to intrusion detection are also reviewed here, including data mining methods for anomaly detection. Chapter III summarizes a set of desired characteristics for the Intelligent Intrusion Detection Model (IIDM) being developed at Mississippi State University. A preliminary architecture which we have developed for integrating machine learning methods with other intrusion detection methods is also described. Chapter IV discusses basic fuzzy logic theory, traditional algorithms for mining association rules, and an original algorithm for mining frequency episodes. In Chapter V, the algorithms we have extended for mining fuzzy association rules and fuzzy frequency episodes are described. We add a normalization step to the procedure for mining fuzzy association rules in order to prevent one data instance from contributing more than others. We also modify the procedure for mining frequency episodes to learn fuzzy frequency episodes. Chapter VI describes a set of experiments of applying fuzzy association rules and fuzzy episode rules for off-line anomaly detection and real-time intrusion detection. We use fuzzy association rules and fuzzy frequency episodes to extract patterns for temporal statistical measurements at a higher level than the data level. We define a modified similarity evaluation function which is continuous and monotonic for the application of fuzzy association rules and fuzzy frequency episodes in anomaly detection. We also present a new real-time intrusion detection method using fuzzy episode rules. The experimental results show the utility of fuzzy association rules and fuzzy frequency episodes in intrusion detection. The conclusions are included in Chapter VII. ii DEDICATION I would like to dedicate this research to my family and my wife. iii ACKNOWLEDGMENTS I am deeply grateful to Dr. Susan Bridges for expending much time to direct me in this entire research project and directing my graduate study and research work …",
"title": ""
},
{
"docid": "41ebeea8947eac92a7a6fa3acb404446",
"text": "Sparse representation has attracted great attention in the past few years. Sparse representation based classification (SRC) algorithm was developed and successfully used for classification. In this paper, a kernel sparse representation based classification (KSRC) algorithm is proposed. Samples are mapped into a high dimensional feature space first and then SRC is performed in this new feature space by perform KSRC directly. In order to overcome this difficulty, we give the method to solve the problem of sparse representation in the high dimensional feature space. If an appropriate kernel is selected, in the high dimensional feature space, a test sample is probably represented as the linear combination of training samples of the same class more accurately. Therefore, KSRC has more powerful classification ability than SRC. Experiments of face recognition, palmprint recognition and finger-knuckle-print recognition demonstrate the effectiveness of KSRC. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a28c252f9f3e96869c72e6e41146b5bc",
"text": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance.",
"title": ""
},
{
"docid": "d5bc5837349333a6f1b0b47f16844c13",
"text": "Personalized news recommender systems have gained increasing attention in recent years. Within a news reading community, the implicit correlations among news readers, news articles, topics and named entities, e.g., what types of named entities in articles are preferred by users, and why users like the articles, could be valuable for building an effective news recommender. In this paper, we propose a novel news personalization framework by mining such correlations. We use hypergraph to model various high-order relations among different objects in news data, and formulate news recommendation as a ranking problem on fine-grained hypergraphs. In addition, by transductive inference, our proposed algorithm is capable of effectively handling the so-called cold-start problem. Extensive experiments on a data set collected from various news websites have demonstrated the effectiveness of our proposed algorithm.",
"title": ""
},
{
"docid": "f160dd844c54dafc8c5265ff0e4d4a05",
"text": "The increasing number of smart phones presents a significant opportunity for the development of m-payment services. Despite the predicted success of m-payment, the market remains immature in most countries. This can be explained by the lack of agreement on standards and business models for all stakeholders in m-payment ecosystem. In this paper, the STOF business model framework is employed to analyze m-payment services from the point of view of one of the key players in the ecosystem i.e., banks. We apply Analytic Hierarchy Process (AHP) method to analyze the critical design issues for four domains of the STOF model. The results of the analysis show that service domain is the most important, followed by technology, organization and finance domains. Security related issues are found to be the most important by bank representatives. The future research can be extended to the m-payment ecosystem by collecting data from different actors from the ecosystem.",
"title": ""
},
{
"docid": "c00c85ec9a5ef7b5127a3c1585780bb5",
"text": "Advective skew dispersion is a natural Markov process defined by a diffusion with drift across an interface of jump discontinuity in a piecewise constant diffusion coefficient. In the absence of drift this process may be represented as a function of α-skew Brownian motion for a uniquely determined value of α = α∗; see Ramirez, Thomann, Waymire, Haggerty and Wood (2006). In the present paper the analysis is extended to the case of non-zero drift. A determination of the (joint) distributions of key functionals of standard skew Brownian motion together with some associated probabilistic semigroup and local time theory is given for these purposes. An application to the dispersion of a solute concentration across an interface is provided that explains certain symmetries and asymmetries in recently reported laboratory experiments conducted at Lawrence-Livermore Berkeley Labs by Berkowitz, Cortis, Dror and Scher (2009).",
"title": ""
},
{
"docid": "9cfa58c71360b596694a27eea19f3f66",
"text": "Introduction. The use of social media is prevalent among college students, and it is important to understand how social media use may impact students' attitudes and behaviour. Prior studies have shown negative outcomes of social media use, but researchers have not fully discovered or fully understand the processes and implications of these negative effects. This research provides additional scientific knowledge by focussing on mediators of social media use and controlling for key confounding variables. Method. Surveys that captured social media use, various attitudes about academics and life, and personal characteristics were completed by 234 undergraduate students at a large U.S. university. Analysis. We used covariance-based structural equation modelling to analyse the response data. Results. Results indicated that after controlling for self-regulation, social media use was negatively associated with academic self-efficacy and academic performance. Additionally, academic self-efficacy mediated the negative relationship between social media use and satisfaction with life. Conclusion. There are negative relationships between social media use and academic performance, as well as with academic self-efficacy beliefs. Academic self-efficacy beliefs mediate the negative relationship between social media use and satisfaction with life. These relationships are present even when controlling for individuals' levels of self-regulation.",
"title": ""
},
{
"docid": "86627478e3d4abe81ed31ff5681925b0",
"text": "This article, the second in a two-part series, aims to provide an overview of the stages involved in conducting a systematic review, focusing on selecting and appraising articles for inclusion and the presentation of data and findings. It is assumed that readers have a basic understanding of research terminology and the skills necessary to critically appraise a review. After reading this article and completing the time out activities you should be able to:",
"title": ""
},
{
"docid": "d49fc093d43fa3cdf40ecfa3f670e165",
"text": "As a result of the increase in robots in various fields, the mechanical stability of specific robots has become an important subject of research. This study is concerned with the development of a two-wheeled inverted pendulum robot that can be applied to an intelligent, mobile home robot. This kind of robotic mechanism has an innately clumsy motion for stabilizing the robot’s body posture. To analyze and execute this robotic mechanism, we investigated the exact dynamics of the mechanism with the aid of 3-DOF modeling. By using the governing equations of motion, we analyzed important issues in the dynamics of a situation with an inclined surface and also the effect of the turning motion on the stability of the robot. For the experiments, the mechanical robot was constructed with various sensors. Its application to a two-dimensional floor environment was confirmed by experiments on factors such as balancing, rectilinear motion, and spinning motion.",
"title": ""
}
] |
scidocsrr
|
5fc47e39a880497bf352ab6fdcd04d6b
|
Detection of potato diseases using image segmentation and multiclass support vector machine
|
[
{
"docid": "804113bb0459eb04d9b163c086050207",
"text": "The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field, which ultimately leads to crops management. The paper describes a software prototype system for rice disease detection based on the infected images of various rice plants. Images of the infected rice plants are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the plants. Then the infected part of the leaf has been used for the classification purpose using neural network. The methods evolved in this system are both image processing and soft computing technique applied on number of diseased rice plants.",
"title": ""
},
{
"docid": "5f6b9fd58c633bf1de0158f0356bda80",
"text": "Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.",
"title": ""
}
] |
[
{
"docid": "fec50e53536febc02b8fe832a97cf833",
"text": "Translational control plays a critical role in the regulation of gene expression in eukaryotes and affects many essential cellular processes, including proliferation, apoptosis and differentiation. Under most circumstances, translational control occurs at the initiation step at which the ribosome is recruited to the mRNA. The eukaryotic translation initiation factor 4E (eIF4E), as part of the eIF4F complex, interacts first with the mRNA and facilitates the recruitment of the 40S ribosomal subunit. The activity of eIF4E is regulated at many levels, most profoundly by two major signalling pathways: PI3K (phosphoinositide 3-kinase)/Akt (also known and Protein Kinase B, PKB)/mTOR (mechanistic/mammalian target of rapamycin) and Ras (rat sarcoma)/MAPK (mitogen-activated protein kinase)/Mnk (MAPK-interacting kinases). mTOR directly phosphorylates the 4E-BPs (eIF4E-binding proteins), which are inhibitors of eIF4E, to relieve translational suppression, whereas Mnk phosphorylates eIF4E to stimulate translation. Hyperactivation of these pathways occurs in the majority of cancers, which results in increased eIF4E activity. Thus, translational control via eIF4E acts as a convergence point for hyperactive signalling pathways to promote tumorigenesis. Consequently, recent works have aimed to target these pathways and ultimately the translational machinery for cancer therapy.",
"title": ""
},
{
"docid": "7394f3000da8af0d4a2b33fed4f05264",
"text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.",
"title": ""
},
{
"docid": "18ec689bc3dcbb076beabaff3bdc43de",
"text": "Much attention has recently been given to the creation of large knowledge bases that contain millions of facts about people, things, and places in the world. These knowledge bases have proven to be incredibly useful for enriching search results, answering factoid questions, and training semantic parsers and relation extractors. The way the knowledge base is actually used in these systems, however, is somewhat shallow—they are treated most often as simple lookup tables, a place to find a factoid answer given a structured query, or to determine whether a sentence should be a positive or negative training example for a relation extraction model. Very little is done in the way of reasoning with these knowledge bases or using them to improve machine reading. This is because typical probabilistic reasoning systems do not scale well to collections of facts as large as modern knowledge bases, and because it is difficult to incorporate information from a knowledge base into typical natural language processing models. In this thesis we present methods for reasoning over very large knowledge bases, and we show how to apply these methods to models of machine reading. The approaches we present view the knowledge base as a graph and extract characteristics of that graph to construct a feature matrix for use in machine learning models. The graph characteristics that we extract correspond to Horn clauses and other logic statements over knowledge base predicates and entities, and thus our methods have strong ties to prior work on logical inference. We show through experiments in knowledge base completion, relation extraction, and question answering that our methods can successfully incorporate knowledge base information into machine learning models of natural language.",
"title": ""
},
{
"docid": "fcb175f1fb5bd1ab20acaa1a7460be53",
"text": "5G networks are expected to be able to satisfy users' different QoS requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on network slicing. Due to the diversity of 5G application scenarios, new mobility management schemes are greatly needed to guarantee seamless handover in network-slicing-based 5G systems. In this article, we introduce a logical architecture for network-slicing-based 5G systems, and present a scheme for managing mobility between different access networks, as well as a joint power and subchannel allocation scheme in spectrum-sharing two-tier systems based on network slicing, where both the co-tier interference and cross-tier interference are taken into account. Simulation results demonstrate that the proposed resource allocation scheme can flexibly allocate network resources between different slices in 5G systems. Finally, several open issues and challenges in network-slicing-based 5G networks are discussed, including network reconstruction, network slicing management, and cooperation with other 5G technologies.",
"title": ""
},
{
"docid": "8746c488535baf8d715232811ca4c8ed",
"text": "To optimize polysaccharide extraction from Spirulina sp., the effect of solid-to-liquid ratio, extraction temperature and time were investigated using Box-Behnken experimental design and response surface methodology. The results showed that extraction temperature and solid-to-liquid ratio had a significant impact on the yield of polysaccharides. A polysaccharides yield of around 8.3% dry weight was obtained under the following optimized conditions: solid-to-liquid ratio of 1:45, temperature of 90°C, and time of 120 min. The polysaccharide extracts contained rhamnose, which accounted for 53% of the total sugars, with a phenolic content of 45 mg GAE/g sample.",
"title": ""
},
{
"docid": "341b1393ce30d856cb831db3b4796157",
"text": "This study presents the design of a dual-band cavity-backed phased array antenna with wide-angle scanning capability in K/Ka bands for the geostationary (GSO)-fixed satellite service (FSS). To reduce the losses and mutual coupling, a novel antenna element similar to the cavity-backed Strip-Slot-Air-Inverted-Patch (SSAIP) is proposed as an element of designed array of 81 elements. Wide-angle scanning up to 60o is achieved at both frequency bands of the operation. The developed antenna is suitable for seamless installation on both land and air vehicles for bi-directional SatCom systems.",
"title": ""
},
{
"docid": "db3523bc1e3616b9fe262e5f6cab7ad8",
"text": "Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning. The objective uses rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we improve dynamic coattention networks (DCN) with a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that requires the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.",
"title": ""
},
{
"docid": "024e4eebc8cb23d85676df920316f62c",
"text": "E-voting technology has been developed for more than 30 years. However it is still distance away from serious application. The major challenges are to provide a secure solution and to gain trust from the voters in using it. In this paper we try to present a comprehensive review to e-voting by looking at these challenges. We summarized the vast amount of security requirements named in the literature that allows researcher to design a secure system. We reviewed some of the e-voting systems found in the real world and the literature. We also studied how a e-voting system can be usable by looking at different usability research conducted on e-voting. Summarizes on different cryptographic tools in constructing e-voting systems are also presented in the paper. We hope this paper can served as a good introduction for e-voting researches.",
"title": ""
},
{
"docid": "cd587b4f35290bf779b0c7ee0214ab72",
"text": "Time series data is perhaps the most frequently encountered type of data examined by the data mining community. Clustering is perhaps the most frequently used data mining algorithm, being useful in it's own right as an exploratory technique, and also as a subroutine in more complex data mining algorithms such as rule discovery, indexing, summarization, anomaly detection, and classification. Given these two facts, it is hardly surprising that time series clustering has attracted much attention. The data to be clustered can be in one of two formats: many individual time series, or a single time series, from which individual time series are extracted with a sliding window. Given the recent explosion of interest in streaming data and online algorithms, the latter case has received much attention.In this work we make a surprising claim. Clustering of streaming time series is completely meaningless. More concretely, clusters extracted from streaming time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature.We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method which, based on the concept of time series motifs, is able to meaningfully cluster some streaming time series datasets.",
"title": ""
},
{
"docid": "ed097b44837a57ad0053ae06a95f1543",
"text": "For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances and occlusion. Hence, there is a need to have a robust function that computes image similarity, to accurately track the moving object. In this work, a hybrid model that incorporates the Kalman Filter, a Siamese neural network and a miniature neural network has been developed for object tracking. It was observed that the usage of the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found that it performs well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP)metrics.",
"title": ""
},
{
"docid": "b49698c3df4e432285448103cda7f2dd",
"text": "Acoustic emission (AE)-signal-based techniques have recently been attracting researchers' attention to rotational machine health monitoring and diagnostics due to the advantages of the AE signals over the extensively used vibration signals. Unlike vibration-based methods, the AE-based techniques are in their infant stage of development. From the perspective of machine health monitoring and fault detection, developing an AE-based methodology is important. In this paper, a methodology for rotational machine health monitoring and fault detection using empirical mode decomposition (EMD)-based AE feature quantification is presented. The methodology incorporates a threshold-based denoising technique into EMD to increase the signal-to-noise ratio of the AE bursts. Multiple features are extracted from the denoised signals and then fused into a single compressed AE feature. The compressed AE features are then used for fault detection based on a statistical method. A gear fault detection case study is conducted on a notional split-torque gearbox using AE signals to demonstrate the effectiveness of the methodology. A fault detection performance comparison using the compressed AE features with the existing EMD-based AE features reported in the literature is also conducted.",
"title": ""
},
{
"docid": "5594475c91355d113e0045043eff8b93",
"text": "Background: Since the introduction of the systematic review process to Software Engineering in 2004, researchers have investigated a number of ways to mitigate the amount of effort and time taken to filter through large volumes of literature.\n Aim: This study aims to provide a critical analysis of text mining techniques used to support the citation screening stage of the systematic review process.\n Method: We critically re-reviewed papers included in a previous systematic review which addressed the use of text mining methods to support the screening of papers for inclusion in a review. The previous review did not provide a detailed analysis of the text mining methods used. We focus on the availability in the papers of information about the text mining methods employed, including the description and explanation of the methods, parameter settings, assessment of the appropriateness of their application given the size and dimensionality of the data used, performance on training, testing and validation data sets, and further information that may support the reproducibility of the included studies.\n Results: Support Vector Machines (SVM), Naïve Bayes (NB) and Committee of classifiers (Ensemble) are the most used classification algorithms. In all of the studies, features were represented with Bag-of-Words (BOW) using both binary features (28%) and term frequency (66%). Five studies experimented with n-grams with n between 2 and 4, but mostly the unigram was used. χ2, information gain and tf-idf were the most commonly used feature selection techniques. Feature extraction was rarely used although LDA and topic modelling were used. Recall, precision, F and AUC were the most used metrics and cross validation was also well used. More than half of the studies used a corpus size of below 1,000 documents for their experiments while corpus size for around 80% of the studies was 3,000 or fewer documents. The major common ground we found for comparing performance assessment based on independent replication of studies was the use of the same dataset but a sound performance comparison could not be established because the studies had little else in common. In most of the studies, insufficient information was reported to enable independent replication. The studies analysed generally did not include any discussion of the statistical appropriateness of the text mining method that they applied. In the case of applications of SVM, none of the studies report the number of support vectors that they found to indicate the complexity of the prediction engine that they use, making it impossible to judge the extent to which over-fitting might account for the good performance results.\n Conclusions: There is yet to be concrete evidence about the effectiveness of text mining algorithms regarding their use in the automation of citation screening in systematic reviews. The studies indicate that options are still being explored, but there is a need for better reporting as well as more explicit process details and access to datasets to facilitate study replication for evidence strengthening. In general, the reader often gets the impression that text mining algorithms were applied as magic tools in the reviewed papers, relying on default settings or default optimization of available machine learning toolboxes without an in-depth understanding of the statistical validity and appropriateness of such tools for text mining purposes.",
"title": ""
},
{
"docid": "1cf5ffbd1929b1d6d475cdfabeb9bf2a",
"text": "In this paper we concern ourselves with the problem of minimizing leakage power in CMOS circuits consisting of AOI (and-or-invert) gates as they operate in stand-by mode or an idle mode waiting for other circuits to complete their operation. It is known that leakage power due to subthreshold leakage current in transistors in the OFF state is dependent on the input vector applied. Therefore, we try to compute an input vector that can be applied to the circuit in stand-by mode so that the power loss due to sub-threshold leakage current is the minimum possible. We employ a integer linear programming (ILP) approach to solve the problem of minimizing leakage by first obtaining a good lower bound (estimate) on the minimum leakage power and then rounding the solution to actually obtain an input vector that causes low leakage. The chief advantage of this technique as opposed to others in the literature is that it invariably provides us with a good idea about the quality of the input vector found.",
"title": ""
},
{
"docid": "419116a3660f1c1f7127de31f311bd1e",
"text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "eaf08b7ea5592617fe88bc713c3e874b",
"text": "In this paper we propose, implement and evaluate OpenSample: a low-latency, sampling-based network measurement platform targeted at building faster control loops for software-defined networks. OpenSample leverages sFlow packet sampling to provide near-real-time measurements of both network load and individual flows. While OpenSample is useful in any context, it is particularly useful in an SDN environment where a network controller can quickly take action based on the data it provides. Using sampling for network monitoring allows OpenSample to have a 100 millisecond control loop rather than the 1-5 second control loop of prior polling-based approaches. We implement OpenSample in the Floodlight Open Flow controller and evaluate it both in simulation and on a test bed comprised of commodity switches. When used to inform traffic engineering, OpenSample provides up to a 150% throughput improvement over both static equal-cost multi-path routing and a polling-based solution with a one second control loop.",
"title": ""
},
{
"docid": "13b60edf872141b7164ed2a92f6534fc",
"text": "Ordinary differential equations (ODEs) provide a classical framework to model the dynamics of biological systems, given temporal experimental data. Qualitative analysis of the ODE model can lead to further biological insight and deeper understanding compared to traditional experiments alone. Simulation of the model under various perturbations can generate novel hypotheses and motivate the design of new experiments. This short paper will provide an overview of the ODE modeling framework, and present examples of how ODEs can be used to address problems in cancer biology.",
"title": ""
},
{
"docid": "f81261c4a64359778fd3d399ba3fe749",
"text": "Credit card frauds are increasing day by day regardless of the various techniques developed for its detection. Fraudsters are so expert that they engender new ways for committing fraudulent transactions each day which demands constant innovation for its detection techniques as well. Many techniques based on Artificial Intelligence, Data mining, Fuzzy logic, Machine learning, Sequence Alignment, decision tree, neural network, logistic regression, naïve Bayesian, Bayesian network, metalearning, Genetic Programming etc., has evolved in detecting various credit card fraudulent transactions. A steady indulgent on all these approaches will positively lead to an efficient credit card fraud detection system. This paper presents a survey of various techniques used in credit card fraud detection mechanisms and Hidden Markov Model (HMM) in detail. HMM categorizes card holder’s profile as low, medium and high spending based on their spending behavior in terms of amount. A set of probabilities for amount of transaction is being assigned to each cardholder. Amount of each incoming transaction is then matched with card owner’s category, if it justifies a predefined threshold value then the transaction is decided to be legitimate else declared as fraudulent. Index Terms — Credit card, fraud detection, Hidden Markov Model, online shopping",
"title": ""
},
{
"docid": "76f11326d1a2573aae8925d63a10a1f9",
"text": "It has been widely claimed that attention and awareness are doubly dissociable and that there is no causal relation between them. In support of this view are numerous claims of attention without awareness, and awareness without attention. Although there is evidence that attention can operate on or be drawn to unconscious stimuli, various recent findings demonstrate that there is no empirical support for awareness without attention. To properly test for awareness without attention, we propose that a stimulus be studied using a battery of tests based on diverse, mainstream paradigms from the current attention literature. When this type of analysis is performed, the evidence is fully consistent with a model in which attention is necessary, but not sufficient, for awareness.",
"title": ""
},
{
"docid": "f70ab6ad03609ff4388a2e78c8891b31",
"text": "The paper describes the design, collection, transcription and analysis of 200 hours of HKUST Mandarin Telephone Speech Corpus (HKUST/MTS) from over 2100 Mandarin speakers in mainland China under the DARPA EARS framework. The corpus includes speech data, transcriptions and speaker demographic information. The speech data include 1206 ten-minute natural Mandarin conversations between either strangers or friends. Each conversation focuses on a single topic. All calls are recorded over public telephone networks. All calls are manually annotated with standard Chinese characters (GBK) as well as specific mark-ups for spontaneous speech. A file with speaker demographic information is also provided. The corpus is the largest and first of its kind for Mandarin conversational telephone speech, providing abundant and diversified samples for Mandarin speech recognition and other applicationdependent tasks, such as topic detection, information retrieval, keyword spotting, speaker recognition, etc. In a 2004 evaluation test by NIST, the corpus is found to improve system performance quite significantly.",
"title": ""
}
] |
scidocsrr
|
c9e39616781a5a45afde2d371a75f11f
|
Rethinking procrastination: positive effects of "active" procrastination behavior on attitudes and performance.
|
[
{
"docid": "eb10f86262180b122d261f5acbe4ce18",
"text": "Procrasttnatton ts variously descnbed a? harmful, tnnocuous, or even beneficial Two longitudinal studies examined procrastination among students Procrasttnators reported lower stress and less illness than nonprocrasttnators early in the semester, but they reported higher stress and more illness late in the term, and overall they were sicker Procrastinators also received lower grades on atl assignment's Procrasttnatton thus appears to be a self-defeating behavior pattem marked by short-term benefits and long-term costs Doing one's work and fulfilling other obligations in a timely fashion seem like integral parts of rational, proper adult funcuoning Yet a majonty of the population admits to procrastinating at least sometimes, and substantial minonties admit to significant personal, occupational, or financial difficulties resulting from their dilatory behavior (Ferran, Johnson, & McCown, 1995) Procrastinauon is often condemned, particularly by people who do not think themselves guilty of it (Burka & Yuen, 1983, Ferran et dl, 1995) Cntics of procrastination depict it as a lazy self-indulgent habit of putting things off for no reason They say it is self-defeating m that It lowers the quality of performance, because one ends up with less time to work (Baumeister & Scher, 1988, Ellis & Knaus, 1977) Others depict it as a destructive strategy of self-handicappmg (Jones & Berglas, 1978), such a,s when people postpone or withhold effort so as to give themselves an excuse for anticipated poor performance (Tice, 1991, Tice & Baumeister, 1990) People who finish their tasks and assignments early may point self-nghteously to the stress suffered by procrastinators at the last minute and say that putting things off is bad for one's physical or mental health (see Boice, 1989, 1996, Rothblum, Solomon, & Murakami, 1986 Solomon & Rothblum, 1984) On the other hand, some procrastinators defend their practice They point out correctly that if one puts in the same amount of work on the project, it does not matter whether this is done early or late Some even say that procrastination improves perfonnance, because the imminent deadline creates excitement and pressure that elicit peak performance \"I do my best work under pressure,\" in the standard phrase (Ferran, 1992, Ferran et al , 1995, Uy, 1995) Even if it were true that stress and illness are higher for people who leave things unul the last minute—and research has not yet provided clear evidence that in fact they both are higher—this might be offset by the enjoyment of carefree times earlier (see Ainslie, 1992) The present investigation involved a longitudinal study of the effects of procrastination on quality of performance, stress, and illness Early in the semester, students were given an assignment with a deadline Procrastinators were identified usmg Lay's (1986) scale Students' well-being was assessed with self-reports of stress and illAddress correspondence Case Western Reserve Unive 7123, e-mail dxt2@po cwiu o Dianne M Tice Department of Psychology, sity 10900 Euclid Ave Cleveland OH 44106ness The validity of the scale was checked by ascertaining whethtr students tumed in the assignment early, on time, or late Finally, task performance was assessed by consulting the grades received Competing predictions could be made",
"title": ""
},
{
"docid": "15dd5107aefccb8dcf2ad6adf1fa1915",
"text": "Several existing self-report measures of coping and the relevant research using these instruments are reviewed. Many of these coping measures suffer from a variety of psychometric weaknesses. A self-report instrument, the Multidimensional Coping Inventory (MCI), was constructed that identifies 3 types of coping styles: task-oriented, emotion-oriented, and avoidance-oriented coping. Support for the multidimensional nature of the MCI is presented, along with support for the reliability of the MCI coping scales. Two studies are presented that assessed the validity of the MCI. The 1st study assessed the construct validity of the MCI by comparing it with the Ways of Coping Questionnaire. The 2nd study also assessed the criterion validity of the MCI by comparing it with measures of depression, anxiety, Type A behaviour, neuroticism, and extraversion. Overall, the results suggest that the MCI is a valid and highly reliable multidimensional measure of coping styles.",
"title": ""
}
] |
[
{
"docid": "fe0587c51c4992aa03f28b18f610232f",
"text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.",
"title": ""
},
{
"docid": "8de0dd3319971a5991a1649b3ae8e1c2",
"text": "Increased intracranial pressure (ICP) is a pathologic state common to a variety of serious neurologic conditions, all of which are characterized by the addition of volume to the intracranial vault. Hence all ICP therapies are directed toward reducing intracranial volume. Elevated ICP can lead to brain damage or death by two principle mechanisms: (1) global hypoxic-ischemic injury, which results from reduction of cerebral perfusion pressure (CPP) and cerebral blood ̄ow, and (2) mechanical compression, displacement, and herniation of brain tissue, which results from mass effect associated with compartmentalized ICP gradients. In unmonitored patients with acute neurologic deterioration, head elevation (30 degrees), hyperventilation (pCO2 26±30 mmHg), and mannitol (1.0±1.5 g/kg) can lower ICP within minutes. Fluid-coupled ventricular catheters and intraparenchymal pressure transducers are the most accurate and reliable devices for measuring ICP in the intensive care unit (ICU) setting. In a monitored patient, treatment of critical ICP elevation (>20 mmHg) should proceed in the following steps: (1) consideration of repeat computed tomography (CT) scanning or consideration of de®nitive neurosurgical intervention, (2) intravenous sedation to attain a quiet, motionless state, (3) optimization of CPP to levels between 70 and 110 mmHg, (4) osmotherapy with mannitol or hypertonic saline, (5) hyperventilation (pCO2 26±30 mmHg), (6) high-dose pentobarbital therapy, and (7) systemic cooling to attain moderate hypothermia (32±33°C). Placement of an ICP monitor and use of a stepwise treatment algorithm are both essential for managing ICP effectively in the ICU setting. Increased intracranial pressure (ICP) can result from a number of insults to the brain, including traumatic brain injury (TBI), stroke, encephalitis, neoplasms, and abscesses (Table 1). The fundamental abnormality common to these diverse disease states is an increase in intracranial volume. Accordingly, all treatments for elevated ICP work by reducing intracranial volume. Prompt recognition and treatment of elevated ICP is essential because sustained elevated ICP can cause brain damage or be rapidly fatal.",
"title": ""
},
{
"docid": "2761ebc7958e27cad7972fd1b9f027dc",
"text": "In this work we describe the design, implementation and evaluation of a novel eye tracker for context-awareness and mobile HCI applications. In contrast to common systems using video cameras, this compact device relies on Electrooculography (EOG). It consists of goggles with dry electrodes integrated into the frame and a small pocket-worn component with a DSP for real-time EOG signal processing. The device is intended for wearable and standalone use: It can store data locally for long-term recordings or stream processed EOG signals to a remote device over Bluetooth. We describe how eye gestures can be efficiently recognised from EOG signals for HCI purposes. In an experiment conducted with 11 subjects playing a computer game we show that 8 eye gestures of varying complexity can be continuously recognised with equal performance to a state-of-the-art video-based system. Physical activity leads to artefacts in the EOG signal. We describe how these artefacts can be removed using an adaptive filtering scheme and characterise this approach on a 5-subject dataset. In addition to explicit eye movements for HCI, we discuss how the analysis of unconscious eye movements may eventually allow to deduce information on user activity and context not available with current sensing modalities.",
"title": ""
},
{
"docid": "05b9ec9f105287fd8091cb79478da6bc",
"text": "There has been great interest in determining if mindfulness can be cultivated and if this cultivation leads to well-being. The current study offers preliminary evidence that at least one aspect of mindfulness, measured by the Mindful Attention and Awareness Scale (MAAS; K. W. Brown & R. M. Ryan, 2003), can be cultivated and does mediate positive outcomes. Further, adherence to the practices taught during the meditation-based interventions predicted positive outcomes. College undergraduates were randomly allocated between training in two distinct meditation-based interventions, Mindfulness Based Stress Reduction (MBSR; J. Kabat-Zinn, 1990; n=15) and E. Easwaran's (1978/1991) Eight Point Program (EPP; n=14), or a waitlist control (n=15). Pretest, posttest, and 8-week follow-up data were gathered on self-report outcome measures. Compared to controls, participants in both treatment groups (n=29) demonstrated increases in mindfulness at 8-week follow-up. Further, increases in mindfulness mediated reductions in perceived stress and rumination. These results suggest that distinct meditation-based practices can increase mindfulness as measured by the MAAS, which may partly mediate benefits. Implications and future directions are discussed.",
"title": ""
},
{
"docid": "5950c26d7a823192dc25b1637203ac43",
"text": "The nature of pain has been the subject of bitter controversy since the turn of the century (1). There are currently two opposing theories of pain: (i) specificity theory, which holds that pain is a specific modality like vision or hearing, \"with its own central and peripheral apparatus\" (2), and (ii) pattern theory, which maintains that the nerve impulse pattern for pain is produced by intense stimulation of nonspecific receptors since \"there are no specific fibers and no specific endings\" (3). Both theories derive from earlier concepts proposed by von Frey (4) and Goldscheider (5) in 1894, and historically they are held to be mutually exclusive. Since it is our purpose here to propose a new theory of pain mechanisms, we shall state explicitly at the outset where we agree and disagree with specificity and pattern theories.",
"title": ""
},
{
"docid": "6683e36077fdcdc60a8cfa5616fbb7d8",
"text": "The concept of sustainability continues to rapidly grow in interest from disparate academic and industrial fields. This research aims to elucidate further the implications of the sustainability drivers upon project management methodological approaches specifically in the manufacturing industry. This paper studies the three prevalent dialogues in the field of sustainability, relevant to the environmental and social aspects of the Triple Bottom Line, and utilises Institutional Theory to propose organisational pressures as affecting sustainability efforts in industrial manufacturing project management. Furthermore, the literature bodies of Lean and Life Cycle Analysis in manufacturing project management guided our reflection that the various drivers of sustainability put forward that do not consider the distinctive organisational pressures fail to address institutional and systemic project management issues holistically. The authors further conduct and draw on a systematic literature review on the constructs of sustainability in the manufacturing industry and their adopted methodologies, evaluating academic articles published from the year 2001 to 2017. The findings indicate that normative pressures prevail over coercive and mimetic pressures and are seen as the main drivers of sustainability in the manufacturing industry. In an incremental reductionist approach, project management knowledge areas are analysed, and the study posits that Stakeholder and Communications Management are two of the knowledge areas that need to integrate the above pressures to achieve cohesive sustainable industrial results. The principle contribution is to offer a new conceptual perspective on integrating project management knowledge areas with Institutional Theory pressures for more sustainable project management methodologies.",
"title": ""
},
{
"docid": "bda2541d2c2a5a5047b29972cb1536f6",
"text": "Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT.",
"title": ""
},
{
"docid": "5e117ed1646972dfec919250666dcd64",
"text": "Low-rank matrices play a fundamental role in modeling and computational methods for signal processing and machine learning. In many applications where low-rank matrices arise, these matrices cannot be fully sampled or directly observed, and one encounters the problem of recovering the matrix given only incomplete and indirect observations. This paper provides an overview of modern techniques for exploiting low-rank structure to perform matrix recovery in these settings, providing a survey of recent advances in this rapidly-developing field. Specific attention is paid to the algorithms most commonly used in practice, the existing theoretical guarantees for these algorithms, and representative practical applications of these techniques.",
"title": ""
},
{
"docid": "b47d53485704f4237e57d220640346a7",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\" (objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 ms) until the mass energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass energy difference leads to sufficient separation of space time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum --, classical reduction occurs. Unlike the random, \"subjective reduction\" (SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a se(f-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for post-reduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\" which tune and \"orchestrate\" the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\" (\"Orch OR\"), and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500ms) will elicit Orch OR. In providing a connection among (1) pre-conscious to conscious transition, (2) fundamental space time notions, (3) non-computability, and (4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\"), we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed. * Corresponding author. Tel.: (520) 626-2116. Fax: (520) 626-2689. E-Mail: srh(cv ccit.arizona.edu. 0378-4754/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved SSDI0378-4754(95 ) 0049-6 454 S. Hameroff, R. Penrose/Mathematics and Computers in Simulation 40 (1996) 453 480",
"title": ""
},
{
"docid": "34ab20699d12ad6cca34f67cee198cd9",
"text": "Such as relational databases, most graphs databases are OLTP databases (online transaction processing) of generic use and can be used to produce a wide range of solutions. That said, they shine particularly when the solution depends, first, on our understanding of how things are connected. This is more common than one may think. And in many cases it is not only how things are connected but often one wants to know something about the different relationships in our field their names, qualities, weight and so on. Briefly, connectivity is the key. The graphs are the best abstraction one has to model and query the connectivity; databases graphs in turn give developers and the data specialists the ability to apply this abstraction to their specific problems. For this purpose, in this paper one used this approach to simulate the route planner application, capable of querying connected data. Merely having keys and values is not enough; no more having data partially connected through joins semantically poor. We need both the connectivity and contextual richness to operate these solutions. The case study herein simulates a railway network railway stations connected with one another where each connection between two stations may have some properties. And one answers the question: how to find the optimized route (path) and know whether a station is reachable from one station or not and in which depth.",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "92a14466bd675f10edb509765cbae18d",
"text": "Conditional Random Field (CRF) and recurrent neural models have achieved success in structured prediction. More re cently, there is a marriage of CRF and recurrent neural models, so that we can gain from both non-linear dense features and globally normalized CRF objective. These recurrent neu ral CRF models mainly focus on encode node features in CRF undirected graphs. However, edge features prove important to CRF in structured prediction. In this work, we introduce a new recurrent neural CRF model, which learns non-linear edge features, and thus makes non-linear featur es encoded completely. We compare our model with different neural models in well-known structured prediction tasks. E xperiments show that our model outperforms state-of-the-ar t methods in NP chunking, shallow parsing, Chinese word segmentation and POS tagging.",
"title": ""
},
{
"docid": "5815fb8da17375f24bbdeab7af91f3a3",
"text": "We introduce a new method for framesemantic parsing that significantly improves the prior state of the art. Our model leverages the advantages of a deep bidirectional LSTM network which predicts semantic role labels word by word and a relational network which predicts semantic roles for individual text expressions in relation to a predicate. The two networks are integrated into a single model via knowledge distillation, and a unified graphical model is employed to jointly decode frames and semantic roles during inference. Experiments on the standard FrameNet data show that our model significantly outperforms existing neural and non-neural approaches, achieving a 5.7 F1 gain over the current state of the art, for full frame structure extraction.",
"title": ""
},
{
"docid": "b1d2de1a59945dcdb05be93d510caaaa",
"text": "This chapter surveys the literature on bubbles, financial crises, and systemic risk. The first part of the chapter provides a brief historical account of bubbles and financial crisis. The second part of the chapter gives a structured overview of the literature on financial bubbles. The third part of the chapter discusses the literatures on financial crises and systemic risk, with particular emphasis on amplification and propagation mechanisms during financial crises, and the measurement of systemic risk. Finally, we point toward some questions for future",
"title": ""
},
{
"docid": "f9c484461ae0bb94387ca372cee2b869",
"text": "Detection of potential hijackings of Unmanned Aerial Vehicles (UAVs) is an important capability to have for the safety of the future airspace and prevention of loss of life and property. In this paper, we propose using basic statistical measures as a fingerprint to flight patterns that can be checked against previous flights. We generated baseline flights and then simulated hijacking scenarios to determine the extent of the feasibility of this method. Our results indicated that all of the direct hijacking scenarios were detected, but flights with control instability caused by malicious acts were not detected.",
"title": ""
},
{
"docid": "da54e46adb991e66a7896f5089e3326e",
"text": "OBJECTIVE\nThis exploratory study reports on maternity clinicians' perceptions of transfer of their responsibility and accountability for patients in relation to clinical handover with particular focus transfers of care in birth suite.\n\n\nDESIGN\nA qualitative study of semistructured interviews and focus groups of maternity clinicians was undertaken in 2007. De-indentified data were transcribed and coded using the constant comparative method. Multiple themes emerged but only those related to responsibility and accountability are reported in this paper.\n\n\nSETTING\nOne tertiary Australian maternity hospital.\n\n\nPARTICIPANTS\nMaternity care midwives, nurses (neonatal, mental health, bed managers) and doctors (obstetric, neontatology, anaesthetics, internal medicine, psychiatry).\n\n\nPRIMARY OUTCOME MEASURES\nPrimary outcome measures were the perceptions of clinicians of maternity clinical handover.\n\n\nRESULTS\nThe majority of participants did not automatically connect maternity handover with the transfer of responsibility and accountability. Once introduced to this concept, they agreed that it was one of the roles of clinical handover. They spoke of complete transfer, shared and ongoing responsibility and accountability. When clinicians had direct involvement or extensive clinical knowledge of the patient, blurring of transition of responsibility and accountability sometimes occurred. A lack of 'ownership' of a patient and their problems were seen to result in confusion about who was to address the clinical issues of the patient. Personal choice of ongoing responsibility and accountability past the handover communication were described. This enabled the off-going person to rectify an inadequate handover or assist in an emergency when duty clinicians were unavailable.\n\n\nCONCLUSIONS\nThere is a clear lack of consensus about the transition of responsibility and accountability-this should be explicit at the handover. It is important that on each shift and new workplace environment clinicians agree upon primary role definitions, responsibilities and accountabilities for patients. To provide system resilience, secondary responsibilities may be allocated as required.",
"title": ""
},
{
"docid": "1d6066e7adbaccaf97e2b55a6bd0c084",
"text": "This paper presents a Java-based hyperbolic-style browser designed to render RDF files as structured ontological maps. The program was motivated by the need to browse the content of a web-accessible ontology server: WEBKB-2. The ontology server contains descriptions of over 74,500 object types derived from the WORDNET 1.7 lexical database and can be accessed using RDF syntax. Such a structure creates complications for hyperbolic-style displays. In WEBKB-2 there are 140 stable ontology link types and a hyperbolic display needs to filter and iconify the view so different link relations can be distinguished in multi-link views. Our browsing tool, ONTORAMA, is therefore motivated by two possibly interfering aims: the first to display up to 10 times the number of nodes in a hyperbolicstyle view than using a conventional graphics display; secondly, to render the ontology with multiple links comprehensible in that view.",
"title": ""
},
{
"docid": "4c61d388acfde29dbf049842ef54a800",
"text": "Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually employ interaction to deal with the image matting problem with trimaps and strokes, and cannot run on the mobile phone in real-time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging the densely connected blocks and the dilated convolution, a light full convolutional network is designed to predict a coarse binary mask for portrait image. And a feathering block, which is edge-preserving and matting adaptive, is further developed to learn the guided filter and transform the binary mask into alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices, which does not need any interaction and can realize real-time matting with 15 fps. The experiments show that the proposed approach achieves comparable results with the state-of-the-art matting solvers.",
"title": ""
},
{
"docid": "44c0da7556c3fd5faacc7faf0d3692cf",
"text": "The study examined the etiology of individual differences in early drawing and of its longitudinal association with school mathematics. Participants (N = 14,760), members of the Twins Early Development Study, were assessed on their ability to draw a human figure, including number of features, symmetry, and proportionality. Human figure drawing was moderately stable across 6 months (average r = .40). Individual differences in drawing at age 4½ were influenced by genetic (.21), shared environmental (.30), and nonshared environmental (.49) factors. Drawing was related to later (age 12) mathematical ability (average r = .24). This association was explained by genetic and shared environmental factors that also influenced general intelligence. Some genetic factors, unrelated to intelligence, also contributed to individual differences in drawing.",
"title": ""
},
{
"docid": "d5948a9cc98ecb0a5080400d30dd4b05",
"text": "In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this \"representation learning\" process is realized in humans. Our results suggest that a bilateral attentional control network comprising the intraparietal sulcus, precuneus, and dorsolateral prefrontal cortex is involved in selecting what dimensions are relevant to the task at hand, effectively updating the task representation through trial and error. In this way, cortical attention mechanisms interact with learning in the basal ganglia to solve the \"curse of dimensionality\" in reinforcement learning.",
"title": ""
}
] |
scidocsrr
|
737b67d4b3014a9e0f19001c80060ad2
|
Profit optimizing customer churn prediction with Bayesian network classifiers
|
[
{
"docid": "b5f8f310f2f4ed083b20f42446d27feb",
"text": "This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.",
"title": ""
}
] |
[
{
"docid": "e45144bf1d377cd910f6f6bd18939a24",
"text": "The Body Esteem Scale (BES; Franzoi and Shields 1984) has been a primary research tool for over 30 years, yet its factor structure has not been fully assessed since its creation, so a two-study design examined whether the BES needed revision. In Study 1, a series of principal components analyses (PCAs) was conducted using the BES responses of 798 undergraduate students, with results indicating that changes were necessary to improve the scale’s accuracy. In Study 2, 1237 undergraduate students evaluated each BES item, along with a select set of new body items, while also rating each item’s importance to their own body esteem. Body items meeting minimum importance criteria were then utilized in a series of PCAs to develop a revised scale that has strong internal consistency and good convergent and discriminant validity. As with the original BES, the revised BES (BES-R) conceives of body esteem as both gender-specific and multidimensional. Given that the accurate assessment of body esteem is essential in better understanding the link between this construct and mental health, the BES-R can now be used in research to illuminate this link, as well as in prevention and treatment programs for body-image issues. Further implications are discussed.",
"title": ""
},
{
"docid": "14fdf8fa41d46ad265b48bbc64a2d3cc",
"text": "Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much the interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts",
"title": ""
},
{
"docid": "95d6189ba97f15c7cc33028f13f8789f",
"text": "This paper presents a new Bayesian nonnegative matrix factorization (NMF) for monaural source separation. Using this approach, the reconstruction error based on NMF is represented by a Poisson distribution, and the NMF parameters, consisting of the basis and weight matrices, are characterized by the exponential priors. A variational Bayesian inference procedure is developed to learn variational parameters and model parameters. The randomness in separation process is faithfully represented so that the system robustness to model variations in heterogeneous environments could be achieved. Importantly, the exponential prior parameters are used to impose sparseness in basis representation. The variational lower bound of log marginal likelihood is adopted as the objective to control model complexity. The dependencies of variational objective on model parameters are fully characterized in the derived closed-form solution. A clustering algorithm is performed to find the groups of bases for unsupervised source separation. The experiments on speech/music separation and singing voice separation show that the proposed Bayesian NMF (BNMF) with adaptive basis representation outperforms the NMF with fixed number of bases and the other BNMFs in terms of signal-to-distortion ratio and the global normalized source to distortion ratio.",
"title": ""
},
{
"docid": "c918f662a60b0ccb36159cf2f0bd051e",
"text": "Graph embedding is an eective method to represent graph data in a low dimensional space for graph analytics. Most existing embedding algorithms typically focus on preserving the topological structure or minimizing the reconstruction errors of graph data, but they have mostly ignored the data distribution of the latent codes from the graphs, which oen results in inferior embedding in real-world graph data. In this paper, we propose a novel adversarial graph embedding framework for graph data. e framework encodes the topological structure and node content in a graph to a compact representation, on which a decoder is trained to reconstruct the graph structure. Furthermore, the latent representation is enforced to match a prior distribution via an adversarial training scheme. To learn a robust embedding, two variants of adversarial approaches, adversarially regularized graph autoencoder (ARGA) and adversarially regularized variational graph autoencoder (ARVGA), are developed. Experimental studies on real-world graphs validate our design and demonstrate that our algorithms outperform baselines by a wide margin in link prediction, graph clustering, and graph visualization tasks.",
"title": ""
},
{
"docid": "859c1b7269c2a297478ca73f521b2ea2",
"text": "This paper analyzes the ability of a doubly fed induction generator (DFIG) in a wind turbine to ride through a grid fault and the limitations to its performance. The fundamental difficulty for the DFIG in ride-through is the electromotive force (EMF) induced in the machine rotor during the fault, which depends on the dc and negative sequence components in the stator-flux linkage and the rotor speed. The investigation develops a control method to increase the probability of successful grid fault ride-through, given the current and voltage capabilities of the rotor-side converter. A time-domain computer simulation model is developed and laboratory experiments are conducted to verify the model and a control method is proposed. Case studies are then performed on a representatively sized system to define the feasibility regions of successful ride-through for different types of grid faults",
"title": ""
},
{
"docid": "ecd8393f05d2e30b488a5828c9a6944a",
"text": "Understanding the changes in the brain which occur in the transition from normal to addictive behavior has major implications in public health. Here we postulate that while reward circuits (nucleus accumbens, amygdala), which have been central to theories of drug addiction, may be crucial to initiate drug self-administration, the addictive state also involves disruption of circuits involved with compulsive behaviors and with drive. We postulate that intermittent dopaminergic activation of reward circuits secondary to drug self-administration leads to dysfunction of the orbitofrontal cortex via the striato-thalamo-orbitofrontal circuit. This is supported by imaging studies showing that in drug abusers studied during protracted withdrawal, the orbitofrontal cortex is hypoactive in proportion to the levels of dopamine D2 receptors in the striatum. In contrast, when drug abusers are tested shortly after last cocaine use or during drug-induced craving, the orbitofrontal cortex is hypermetabolic in proportion to the intensity of the craving. Because the orbitofrontal cortex is involved with drive and with compulsive repetitive behaviors, its abnormal activation in the addicted subject could explain why compulsive drug self-administration occurs even with tolerance to the pleasurable drug effects and in the presence of adverse reactions. This model implies that pleasure per se is not enough to maintain compulsive drug administration in the drugaddicted subject and that drugs that could interfere with the activation of the striato-thalamo-orbitofrontal circuit could be beneficial in the treatment of drug addiction.",
"title": ""
},
{
"docid": "9b5eca94a1e02e97e660d0f5e445a8a1",
"text": "PURPOSE\nThe purpose of this study was to evaluate the effect of individualized repeated intravitreal injections of ranibizumab (Lucentis, Genentech, South San Francisco, CA) on visual acuity and central foveal thickness (CFT) for branch retinal vein occlusion-induced macular edema.\n\n\nMETHODS\nThis study was a prospective interventional case series. Twenty-eight eyes of 28 consecutive patients diagnosed with branch retinal vein occlusion-related macular edema treated with repeated intravitreal injections of ranibizumab (when CFT was >225 microm) were evaluated. Optical coherence tomography and fluorescein angiography were performed monthly.\n\n\nRESULTS\nThe mean best-corrected distance visual acuity improved from 62.67 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.74 +/- 0.28 [mean +/- standard deviation]) at baseline to 76.8 Early Treatment of Diabetic Retinopathy Study letters (logarithm of the minimum angle of resolution = 0.49 +/- 0.3; statistically significant, P < 0.001) at the end of the follow-up (9 months). The mean letter gain (including the patients with stable and worse visual acuities) was 14.3 letters (2.9 lines). During the same period, 22 of the 28 eyes (78.6%) showed improved visual acuity, 4 (14.2%) had stable visual acuity, and 2 (7.14%) had worse visual acuity compared with baseline. The mean CFT improved from 349 +/- 112 microm at baseline to 229 +/- 44 microm (significant, P < 0.001) at the end of follow-up. A mean of six injections was performed during the follow-up period. Our subgroup analysis indicated that patients with worse visual acuity at presentation (<or=50 letters in our series) showed greater visual benefit from treatment. \"Rebound\" macular edema was observed in 5 patients (17.85%) at the 3-month follow-up visit and in none at the 6- and 9-month follow-ups. In 18 of the 28 patients (53.6%), the CFT was <225 microm at the last follow-up visit, and therefore, further treatment was not instituted. No ocular or systemic side effects were noted.\n\n\nCONCLUSION\nIndividualized repeated intravitreal injections of ranibizumab showed promising short-term results in visual acuity improvement and decrease in CFT in patients with macular edema associated with branch retinal vein occlusion. Further studies are needed to prove the long-term effect of ranibizumab treatment on patients with branch retinal vein occlusion.",
"title": ""
},
{
"docid": "012c50c7674078fcc524ec16e99cf953",
"text": "We present the use of a modified corporoplasty, based on geometrical principles, to determine the exact site for the incision in the tunica or plaque and the exact amount of albuginea for overlaying to correct with extreme precision the different types of congenital or acquired penile curvature due to Peyronie’s disease. To describe our experience with a new surgical procedure for the enhancement of penile curvature avoiding any overcorrection or undercorrection. Between March 2004 and April 2013, a total of 74 patients underwent the geometrical modified corporoplasty. All patients had congenital curvature until 90° or acquired stable penile curvature ‘less’ than 60°, that made sexual intercourse very difficult or impossible, normal erectile function, absence of hourglass or hinge effect. Preoperative testing included a physical examination, 3 photographs (frontal, dorsal and lateral) of penis during erection, a 10 mcg PGE1-induced erection and Doppler ultrasound, administration of the International Index of Erectile Function (IIEF-15) questionnaire. A follow-up with postoperative evaluation at 12 weeks, 12 and 24 months, included the same preoperative testing. Satisfaction rates were better assessed with the use of validated questionnaire such as the International Erectile Dysfunction Inventory of the Treatment Satisfaction (EDITS). Statistical analysis with Student’s t-test was performed using commercially available, personal computer software. A total of 25 patients had congenital penile curvature with a mean deviation of 46.8° (range 40–90), another 49 patients had Peyronie’s disease with a mean deviation of 58.4 (range 45–60). No major complications were reported. Postoperative correction of the curvature was achieved in all patients (100%). Neither undercorrection nor overcorrection were recorded. No significant relapse (curvature>15°) occurred in our patients. Shortening of the penis was reported by 74% but did not influence the high overall satisfaction of 92% (patients completely satisfied with their sexual life). The erectile function was analyzed in both groups, Student’s t-test showed a significant improvement in erectile function, preoperative average IIEF-15 scores were 17.43±4.67, whereas postoperatively it was 22.57±4.83 (P=0.001). This geometrical modified Nesbit corporoplasty is a valid therapy which allows penile straightening. The geometric principles make the technique reproducible in multicentre studies.",
"title": ""
},
{
"docid": "abb43256001147c813d12b89d2f9e67b",
"text": "We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. Given a procedure for approximate PCA, one can use it to approximately solve problems such as k-means clustering and low rank approximation. The essential properties of an approximate distributed PCA algorithm are its communication cost and computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA which lead to improved communication and computational costs for k-means clustering and related problems. Our empirical study on real world data shows a speedup of orders of magnitude, preserving communication with only a negligible degradation in solution quality. Some of these techniques we develop, such as a general transformation from a constant success probability subspace embedding to a high success probability subspace embedding with a dimension and sparsity independent of the success probability, may be of independent interest.",
"title": ""
},
{
"docid": "1f3f352c7584fb6ec1924ca3621fb1fb",
"text": "The National Firearms Forensic Intelligence Database (NFFID (c) Crown Copyright 2003-2008) was developed by The Forensic Science Service (FSS) as an investigative tool for collating and comparing information from items submitted to the FSS to provide intelligence reports for the police and relevant government agencies. The purpose of these intelligence reports was to highlight current firearm and ammunition trends and their distribution within the country. This study reviews all the trends that have been highlighted by NFFID between September 2003 and September 2008. A total of 8887 guns of all types have been submitted to the FSS over the last 5 years, where an average of 21% of annual submissions are converted weapons. The makes, models, and modes of conversion of these weapons are described in detail. The number of trends identified by NFFID shows that this has been a valuable tool in the analysis of firearms-related crime.",
"title": ""
},
{
"docid": "4cfd4f09a88186cb7e5f200e340d1233",
"text": "Keyword spotting (KWS) aims to detect predefined keywords in continuous speech. Recently, direct deep learning approaches have been used for KWS and achieved great success. However, these approaches mostly assume fixed keyword vocabulary and require significant retraining efforts if new keywords are to be detected. For unrestricted vocabulary, HMM based keywordfiller framework is still the mainstream technique. In this paper, a novel deep learning approach is proposed for unrestricted vocabulary KWS based on Connectionist Temporal Classification (CTC) with Long Short-Term Memory (LSTM). Here, an LSTM is trained to discriminant phones with the CTC criterion. During KWS, an arbitrary keyword can be specified and it is represented by one or more phone sequences. Due to the property of peaky phone posteriors of CTC, the LSTM can produce a phone lattice. Then, a fast substring matching algorithm based on minimum edit distance is used to search the keyword phone sequence on the phone lattice. The approach is highly efficient and vocabulary independent. Experiments showed that the proposed approach can achieve significantly better results compared to a DNN-HMM based keyword-filler decoding system. In addition, the proposed approach is also more efficient than the DNN-HMM KWS baseline.",
"title": ""
},
{
"docid": "f4639c2523687aa0d5bfdd840df9cfa4",
"text": "This established database of manufacturers and thei r design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the pa rts of the jeepney vehicle using Philippine National Standards and international sta ndards. The study revealed that most jeepney manufacturing firms have varied specificati ons with regard to the capacity, dimensions and weight of the vehicle and similar sp ecification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers an d passengers want to improve, change and standardize the parts of the jeepney vehicle. The p arts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out 32 UNECE Regulations applicable for jeepney vehicle. It is concluded tha t t e jeepney vehicle can be standardized in terms of design, safety and environmental concerns.",
"title": ""
},
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
},
{
"docid": "5e681caab6212e3f82d482f2ac332a14",
"text": "Task-aware flow schedulers collect task information across the data center to optimize task-level performance. However, the majority of the tasks, which generate short flows and are called tiny tasks, have been largely overlooked by current schedulers. The large number of tiny tasks brings significant overhead to the centralized schedulers, while the existing decentralized schedulers are too complex to fit in commodity switches. In this paper we present OPTAS, a lightweight, commodity-switch-compatible scheduling solution that efficiently monitors and schedules flows for tiny tasks with low overhead. OPTAS monitors system calls and buffer footprints to recognize the tiny tasks, and assigns them with higher priorities than larger ones. The tiny tasks are then transferred in a FIFO manner by adjusting two attributes, namely, the window size and round trip time, of TCP. We have implemented OPTAS as a Linux kernel module, and experiments on our 37-server testbed show that OPTAS is at least 2.2× faster than fair sharing, and 1.2× faster than only assigning tiny tasks with the highest priority.",
"title": ""
},
{
"docid": "d83e90a88f3a59ed09b01112131ded2b",
"text": "Purpose. Sentiment analysis and emotion processing are attracting increasing interest in many fields. Computer and information scientists are developing automated methods for sentiment analysis of online text. Most of the research have focused on identifying sentiment polarity or orientation—whether a document, usually product or movie review, carries a positive or negative sentiment. It is time for researchers to address more sophisticated kinds of sentiment analysis. This paper evaluates a particular linguistic framework called appraisal theory for adoption in manual as well as automatic sentiment analysis of news text. Methodology. The appraisal theory is applied to the analysis of a sample of political news articles reporting on Iraq and economic policies of George W. Bush and Mahmoud Ahmadinejad to assess its utility and to identify challenges in adopting this framework. Findings. The framework was useful in uncovering various aspects of sentiment that should be useful to researchers such as the appraisers and object of appraisal, bias of the appraisers and the author, type of attitude and manner of expressing the sentiment. Problems encountered include difficulty in identifying appraisal phrases and attitude categories because of the subtlety of expression in political news articles, lack of treatment of tense and timeframe, lack of a typology of emotions, and need to identify different types of behaviors (political, verbal and material actions) that reflect sentiment. Value. The study has identified future directions for research in automated sentiment analysis as well as sentiment analysis of online news text. It has also demonstrated how sentiment analysis of news text can be carried out.",
"title": ""
},
{
"docid": "db4b6a75db968868630720f7955d9211",
"text": "Bots have been playing a crucial role in online platform ecosystems, as efficient and automatic tools to generate content and diffuse information to the social media human population. In this chapter, we will discuss the role of social bots in content spreading dynamics in social media. In particular, we will first investigate some differences between diffusion dynamics of content generated by bots, as opposed to humans, in the context of political communication, then study the characteristics of bots behind the diffusion dynamics of social media spam campaigns.",
"title": ""
},
{
"docid": "fedcb2bd51b9fd147681ae23e03c7336",
"text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the fl avonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of accute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be benefi cial in the reduction of chronic infl ammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of infl ammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal infl ammation.",
"title": ""
},
{
"docid": "e5e817d6cadc18d280d912fea42cdd9a",
"text": "Recent discoveries of geographical patterns in microbial distribution are undermining microbiology's exclusively ecological explanations of biogeography and their fundamental assumption that 'everything is everywhere: but the environment selects'. This statement was generally promulgated by Dutch microbiologist Martinus Wilhelm Beijerinck early in the twentieth century and specifically articulated in 1934 by his compatriot, Lourens G. M. Baas Becking. The persistence of this precept throughout twentieth-century microbiology raises a number of issues in relation to its formulation and widespread acceptance. This paper will trace the conceptual history of Beijerinck's claim that 'everything is everywhere' in relation to a more general account of its theoretical, experimental and institutional context. His principle also needs to be situated in relationship to plant and animal biogeography, which, this paper will argue, forms a continuum of thought with microbial biogeography. Finally, a brief overview of the contemporary microbiological research challenging 'everything is everywhere' reveals that philosophical issues from Beijerinck's era of microbiology still provoke intense discussion in twenty-first century investigations of microbial biogeography.",
"title": ""
},
{
"docid": "676445f43b7b8fa44afaa47ff74b176c",
"text": "The study of light at the nanoscale has become a vibrant field of research, as researchers now master the flow of light at length scales far below the optical wavelength, largely surpassing the classical limits imposed by diffraction. Using metallic and dielectric nanostructures precisely sculpted into two-dimensional (2D) and 3D nanoarchitectures, light can be scattered, refracted, confined, filtered, and processed in fascinating new ways that are impossible to achieve with natural materials and in conventional geometries. This control over light at the nanoscale has not only unveiled a plethora of new phenomena but has also led to a variety of relevant applications, including new venues for integrated circuitry, optical computing, solar, and medical technologies, setting high expectations for many novel discoveries in the years to come.",
"title": ""
}
] |
scidocsrr
|
1517db59a31b235a7a32c46df6943d79
|
Virtual AoA and AoD estimation for sparse millimeter wave MIMO channels
|
[
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
{
"docid": "c62f8d08b45a16eb26a45e47a65e69b9",
"text": "In this paper, we propose a feasible beamforming (BF) scheme realized in media access control (MAC) layer following the guidelines of the IEEE 802.15.3c criteria for millimeterwave 60GHz wireless personal area networks (60GHz WPANs). The proposed BF targets to minimize the BF set-up time and mitigates the high path loss of 60GHz WPAN systems. It is based on designed multi-resolution codebooks, which generate three kinds of patterns of different half power beam widths (HPBWs): quasi-omni pattern, sector and beam. These three kinds of patterns are employed in the three stages of the BF protocol, namely device-to-device (DEV-to-DEV) linking, sectorlevel searching and beam-level searching. All the three stages can be completed within one superframe, which minimizes the potential interference to other systems during BF set-up period. In this paper, we show some example codebooks and provide the details of BF procedure. Simulation results show that the setup time of the proposed BF protocol is as small as 2% when compared to the exhaustive searching protocol. The proposed BF is a complete design, it re-uses commands specified in IEEE 802.15.3c, completely compliant to the standard; It has thus been adopted by IEEE 802.15.3c as an optional functionality to realize Giga-bit-per-second (Gbps) communication in WPAN Systems.",
"title": ""
}
] |
[
{
"docid": "c0d7ba264ca5b8a4effeca047f416763",
"text": "We propose a novel dependency-based hybrid tree model for semantic parsing, which converts natural language utterance into machine interpretable meaning representations. Unlike previous state-of-the-art models, the semantic information is interpreted as the latent dependency between the natural language words in our joint representation. Such dependency information can capture the interactions between the semantics and natural language words. We integrate a neural component into our model and propose an efficient dynamicprogramming algorithm to perform tractable inference. Through extensive experiments on the standard multilingual GeoQuery dataset with eight languages, we demonstrate that our proposed approach is able to achieve state-ofthe-art performance across several languages. Analysis also justifies the effectiveness of using our new dependency-based representation.1",
"title": ""
},
{
"docid": "6ee26f725bfb63a6ff72069e48404e68",
"text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.",
"title": ""
},
{
"docid": "bb64d33190d359461a4258e0ed3d3229",
"text": "In this paper, we consider the class of first-order algebraic ordinary differential equations (AODEs), and study their rational solutions in three different approaches. A combinatorial approach gives a degree bound for rational solutions of a class of AODEs which do not have movable poles. Algebraic considerations yield an algorithm for computing rational solutions of quasilinear AODEs. And finally ideas from algebraic geometry combine these results to an algroithm for finding all rational solutions of a class of firstorder AODEs which covers all examples from the collection of Kamke. In particular, parametrizations of algebraic curves play an important role for a transformation of a parametrizable first-order AODE to a quasi-linear differential equation.",
"title": ""
},
{
"docid": "205c1939369c6cc80838f562a57156a5",
"text": "This paper examines the role of the human driver as the primary control element within the traditional driver-vehicle system. Lateral and longitudinal control tasks such as path-following, obstacle avoidance, and headway control are examples of steering and braking activities performed by the human driver. Physical limitations as well as various attributes that make the human driver unique and help to characterize human control behavior are described. Example driver models containing such traits and that are commonly used to predict the performance of the combined driver-vehicle system in lateral and longitudinal control tasks are identified.",
"title": ""
},
{
"docid": "47caefb6e3228160c75f3ae1746248b8",
"text": "A new resistively loaded vee dipole (RVD) is designed and implemented for ultrawide-band short-pulse ground-penetrating radar (GPR) applications. The new RVD is improved in terms of voltage standing wave ratio, gain, and front-to-back ratio while maintaining many advantages of the typical RVD, such as the ability to radiate a short-pulse into a small spot on the ground, a low radar cross section, applicability in an array, etc. The improvements are achieved by curving the arms and modifying the Wu-King loading profile. The curve and the loading profile are designed to decrease the reflection at the drive point of the antenna while increasing the forward gain. The new RVD is manufactured by printing the curved arms on a thin Kapton film and loading them with chip resistors, which approximate the continuous loading profile. The number of resistors is chosen such that the resonant frequency due to the resistor spacing occurs at a frequency higher than the operation bandwidth. The antenna and balun are made in a module by sandwiching them between two blocks of polystyrene foam, attaching a plastic support, and encasing the foam blocks in heat-sealable plastic. The antenna module is mechanically reliable without significant performance degradation. The use of the new RVD module in a GPR system is also demonstrated with an experiment.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "53371fac3b92afe5bc6c51dccd95fc4b",
"text": "Multi-frequency electrical impedance tomography (EIT) systems require stable voltage controlled current generators that will work over a wide frequency range and with a large variation in load impedance. In this paper we compare the performance of two commonly used designs: the first is a modified Howland circuit whilst the second is based on a current mirror. The output current and the output impedance of both circuits were determined through PSPICE simulation and through measurement. Both circuits were stable over the frequency ranges 1 kHz to 1 MHz. The maximum variation of output current with frequency for the modified Howland circuit was 2.0% and for the circuit based on a current mirror 1.6%. The output impedance for both circuits was greater than 100 kohms for frequencies up to 100 kHz. However, neither circuit achieved this output impedance at 1 MHz. Comparing the results from the two circuits suggests that there is little to choose between them in terms of a practical implementation.",
"title": ""
},
{
"docid": "0e521af53f9faf4fee38843a22ec2185",
"text": "Steering of main beam of radiation at fixed millimeter wave frequency in a Substrate Integrated Waveguide (SIW) Leaky Wave Antenna (LWA) has not been investigated so far in literature. In this paper a Half-Mode Substrate Integrated Waveguide (HMSIW) LWA is proposed which has the capability to steer its main beam at fixed millimeter wave frequency of 24GHz. Beam steering is made feasible by changing the capacitance of the capacitors, connected at the dielectric side of HMSIW. The full wave EM simulations show that the main beam scans from 36° to 57° in the first quadrant.",
"title": ""
},
{
"docid": "db53a67d449cb36053422c5dbc07f8de",
"text": "We propose CAVIA, a meta-learning method for fast adaptation that is scalable, flexible, and easy to implement. CAVIA partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, the context parameters are updated with one or several gradient steps on a task-specific loss that is backpropagated through the shared part of the network. Compared to approaches that adjust all parameters on a new task (e.g., MAML), CAVIA can be scaled up to larger networks without overfitting on a single task, is easier to implement, and is more robust to the inner-loop learning rate. We show empirically that CAVIA outperforms MAML on regression, classification, and reinforcement learning problems.",
"title": ""
},
{
"docid": "05eb1af3e6838640b6dc5c1c128cc78a",
"text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.",
"title": ""
},
{
"docid": "e4493c56867bfe62b7a96b33fb171fad",
"text": "In the field of agricultural information, the automatic identification and diagnosis of maize leaf diseases is highly desired. To improve the identification accuracy of maize leaf diseases and reduce the number of network parameters, the improved GoogLeNet and Cifar10 models based on deep learning are proposed for leaf disease recognition in this paper. Two improved models that are used to train and test nine kinds of maize leaf images are obtained by adjusting the parameters, changing the pooling combinations, adding dropout operations and rectified linear unit functions, and reducing the number of classifiers. In addition, the number of parameters of the improved models is significantly smaller than that of the VGG and AlexNet structures. During the recognition of eight kinds of maize leaf diseases, the GoogLeNet model achieves a top - 1 average identification accuracy of 98.9%, and the Cifar10 model achieves an average accuracy of 98.8%. The improved methods are possibly improved the accuracy of maize leaf disease, and reduced the convergence iterations, which can effectively improve the model training and recognition efficiency.",
"title": ""
},
{
"docid": "837b9d2834b72c7d917203457aafa421",
"text": "The strongly nonlinear magnetic characteristic of Switched Reluctance Motors (SRMs) makes their torque control a challenging task. In contrast to standard current-based control schemes, we use Model Predictive Control (MPC) and directly manipulate the switches of the dc-link power converter. At each sampling time a constrained finite-time optimal control problem based on a discrete-time nonlinear prediction model is solved yielding a receding horizon control strategy. The control objective is torque regulation while winding currents and converter switching frequency are minimized. Simulations demonstrate that a good closed-loop performance is achieved already for short prediction horizons indicating the high potential of MPC in the control of SRMs.",
"title": ""
},
{
"docid": "ec8684e227bf63ac2314ce3cb17e2e8b",
"text": "Musical genre classification is the automatic classification of audio signals into user defined labels describing pieces of music. A problem inherent to genre classification experiments in music information retrieval research is the use of songs from the same artist in both training and test sets. We show that this does not only lead to overoptimistic accuracy results but also selectively favours particular classification approaches. The advantage of using models of songs rather than models of genres vanishes when applying an artist filter. The same holds true for the use of spectral features versus fluctuation patterns for preprocessing of the audio files.",
"title": ""
},
{
"docid": "52bee48854d8eaca3b119eb71d79c22d",
"text": "In this paper, we present a new combined approach for feature extraction, classification, and context modeling in an iterative framework based on random decision trees and a huge amount of features. A major focus of this paper is to integrate different kinds of feature types like color, geometric context, and auto context features in a joint, flexible and fast manner. Furthermore, we perform an in-depth analysis of multiple feature extraction methods and different feature types. Extensive experiments are performed on challenging facade recognition datasets, where we show that our approach significantly outperforms previous approaches with a performance gain of more than 15% on the most difficult dataset.",
"title": ""
},
{
"docid": "99d1c93150dfc1795970323ec5bb418e",
"text": "People can refer to quantities in a visual scene by using either exact cardinals (e.g. one, two, three) or natural language quantifiers (e.g. few, most, all). In humans, these two processes underlie fairly different cognitive and neural mechanisms. Inspired by this evidence, the present study proposes two models for learning the objective meaning of cardinals and quantifiers from visual scenes containing multiple objects. We show that a model capitalizing on a ‘fuzzy’ measure of similarity is effective for learning quantifiers, whereas the learning of exact cardinals is better accomplished when information about number is provided.",
"title": ""
},
{
"docid": "424a0f5f4a725b85fabb8c7ee19c6e3c",
"text": "The data on dental variability in natural populations of sibling species of common voles (“arvalis” group, genus Microtus) from European and Asian parts of the species’ ranges are summarized using a morphotype-based approach to analysis of dentition. Frequency distributions of the first lower (m1) and the third upper (M3) molar morphotypes are analyzed in about 65 samples of M. rossiaemeridionalis and M. arvalis represented by arvalis and obscurus karyotypic forms. Because of extreme similarity of morphotype dental patterns in the taxa studied, it is impossible to use molar morphotype frequencies for species identification. However, a morphotype-based approach to analysis of dental variability does allow analysis of inter-species comparisons from an evolutionary standpoint. Three patterns of dental complexity are established in the taxa studied: simple, basic (the most typical within the ranges of both species), and complex. In M. rossiaemeridionalis and in M. arvalis obscurus only the basic pattern of dentition occurs. In M. arvalis arvalis, both simple and basic dental patterns are found. Analysis of association of morphotype dental patterns with geographical and environmental variables reveals an increase in the number of complex molars with longitude and latitude: in M. arvalis the pattern of molar complication is more strongly related to longitude, and in M. rossiaemeridionalis—to latitude. Significant decrease in incidence of simple molars with climate continentality and increasing aridity is found in M. arvalis. The simple pattern of dentition is found in M. arvalis arvalis in Spain, along the Atlantic coast of France and on islands thereabout, in northeastern Germany and Kirov region in European Russia. Hypotheses to explain the distribution of populations with different dental patterns within the range of M. arvalis sensu stricto are discussed.",
"title": ""
},
{
"docid": "8b224de0808d3ed64445d8e1d7a1a5b8",
"text": "ASCIMER (Assessing Smart Cities in the Mediterranean Region) is a project developed by the Universidad Politecnica of Madrid (UPM) for the EIBURS call on “Smart City Development: Applying European and International Experience to the Mediterranean Region”. Nowadays, many initiatives aimed at analysing the conception process, deployment methods or outcomes of the -referred asSmart City projects are being developed in multiple fields. Since its conception, the Smart City notion has evolved from the execution of specific projects to the implementation of global strategies to tackle wider city challenges. ASCIMER ́s project takes as a departure point that any kind of Smart City assessment should give response to the real challenges that cities of the 21st century are facing. It provides a comprehensive overview of the available possibilities and relates them to the specific city challenges. A selection of Smart City initiatives will be presented in order to establish relations between the identified city challenges and real Smart Projects designed to solve them. As a result of the project, a Projects Guide has been developed as a tool for the implementation of Smart City projects that efficiently respond to complex and diverse urban challenges without compromising their sustainable development and while improving the quality of life of their citizens.",
"title": ""
},
{
"docid": "087b1951ec35db6de6f4739404277913",
"text": "A possible scenario for the evolution of Television Broadcast is the adoption of 8 K resolution video broadcasting. To achieve the required bit-rates MIMO technologies are an actual candidate. In this scenario, this paper collected electric field levels from a MIMO experimental system for TV broadcasting to tune the parameters of the ITU-R P.1546 propagation model, which has been employed to model VHF and UHF broadcast channels. The parameters are tuned for each polarization alone and for both together. This is done considering multiple reception points and also a larger capturing time interval for a fixed reception site. Significant improvements on the match between the actual and measured link budget are provided by the optimized parameters.",
"title": ""
},
{
"docid": "e6cbd8d32233e7e683b63a5a1a0e91f8",
"text": "Background:Quality of life is an important end point in clinical trials, yet there are few quality of life questionnaires for neuroendocrine tumours.Methods:This international multicentre validation study assesses the QLQ-GINET21 Quality of Life Questionnaire in 253 patients with gastrointestinal neuroendocrine tumours. All patients were requested to complete two quality of life questionnaires – the EORTC Core Quality of Life questionnaire (QLQ-C30) and the QLQ-GINET21 – at baseline, and at 3 and 6 months post-baseline; the psychometric properties of the questionnaire were then analysed.Results:Analysis of QLQ-GINET21 scales confirmed appropriate aggregation of the items, except for treatment-related symptoms, where weight gain showed low correlation with other questions in the scale; weight gain was therefore analysed as a single item. Internal consistency of scales using Cronbach’s α coefficient was >0.7 for all parts of the QLQ-GINET21 at 6 months. Intraclass correlation was >0.85 for all scales. Discriminant validity was confirmed, with values <0.70 for all scales compared with each other.Scores changed in accordance with alterations in performance status and in response to expected clinical changes after therapies. Mean scores were similar for pancreatic and other tumours.Conclusion:The QLQ-GINET21 is a valid and responsive tool for assessing quality of life in the gut, pancreas and liver neuroendocrine tumours.",
"title": ""
},
{
"docid": "1f364472fcf7da9bfc18d9bb8a521693",
"text": "The Cre/lox system is widely used in mice to achieve cell-type-specific gene expression. However, a strong and universally responding system to express genes under Cre control is still lacking. We have generated a set of Cre reporter mice with strong, ubiquitous expression of fluorescent proteins of different spectra. The robust native fluorescence of these reporters enables direct visualization of fine dendritic structures and axonal projections of the labeled neurons, which is useful in mapping neuronal circuitry, imaging and tracking specific cell populations in vivo. Using these reporters and a high-throughput in situ hybridization platform, we are systematically profiling Cre-directed gene expression throughout the mouse brain in several Cre-driver lines, including new Cre lines targeting different cell types in the cortex. Our expression data are displayed in a public online database to help researchers assess the utility of various Cre-driver lines for cell-type-specific genetic manipulation.",
"title": ""
}
] |
scidocsrr
|
00c1de2024ce7901cefa84a1dc69756b
|
Finding the Needle: A Study of the PE32 Rich Header and Respective Malware Triage
|
[
{
"docid": "a1c82e67868ef3426896cdb541371d79",
"text": "Executable packing is the most common technique used by computer virus writers to obfuscate malicious code and evade detection by anti-virus software. Universal unpackers have been proposed that can detect and extract encrypted code from packed executables, therefore potentially revealing hidden viruses that can then be detected by traditional signature-based anti-virus software. However, universal unpackers are computationally expensive and scanning large collections of executables looking for virus infections may take several hours or even days. In this paper we apply pattern recognition techniques for fast detection of packed executables. The objective is to efficiently and accurately distinguish between packed and non-packed executables, so that only executables detected as packed will be sent to an universal unpacker, thus saving a significant amount of processing time. We show that our system achieves very high detection accuracy of packed executables with a low average processing time.",
"title": ""
},
{
"docid": "7a1ae241af3fca6114f016301d7527f8",
"text": "In this paper, we present our reverse engineering results for the Zeus crimeware toolkit which is one of the recent and powerful crimeware tools that emerged in the Internet underground community to control botnets. Zeus has reportedly infected over 3.6 million computers in the United States. Our analysis aims at uncovering the various obfuscation levels and shedding the light on the resulting code. Accordingly, we explain the bot building and installation/infection processes. In addition, we detail a method to extract the encryption key from the malware binary and use that to decrypt the network communications and the botnet configuration information. The reverse engineering insights, together with network traffic analysis, allow for a better understanding of the technologies and behaviors of such modern HTTP botnet crimeware toolkits and opens an opportunity to inject falsified information into the botnet communications which can be used to defame this crimeware toolkit.",
"title": ""
},
{
"docid": "0618529a20e00174369a05077294de5b",
"text": "In this paper we present a case study of the steps leading up to the extraction of the spam bot payload found within a backdoor rootkit known as Backdoor.Rustock.B or Spam-Mailbot.c. Following the extraction of the spam module we focus our analysis on the steps necessary to decrypt the communications between the command and control server and infected hosts. Part of the discussion involves a method to extract the encryption key from within the malware binary and use that to decrypt the communications. The result is a better understanding of an advanced botnet communications scheme.",
"title": ""
}
] |
[
{
"docid": "dec458753778c3e10de8403ed11d08d0",
"text": "The blockchain emerges as an innovative tool which proves to be useful in a number of application scenarios. A number of large industrial players, such as IBM, Microsoft, Intel, and NEC, are currently investing in exploiting the blockchain in order to enrich their portfolio of products. A number of researchers and practitioners speculate that the blockchain technology can change the way we see a number of online applications today. Although it is still early to tell for sure, it is expected that the blockchain will stimulate considerable changes to a large number of products and will positively impact the digital experience of many individuals around the globe. In this tutorial, we overview, detail, and analyze the security provisions of Bitcoin and its underlying blockchain-effectively capturing recently reported attacks and threats in the system. Our contributions go beyond the mere analysis of reported vulnerabilities of Bitcoin; namely, we describe and evaluate a number of countermeasures to deter threats on the system-some of which have already been incorporated in the system. Recall that Bitcoin has been forked multiple times in order to fine-tune the consensus (i.e., the block generation time and the hash function), and the network parameters (e.g., the size of blocks). As such, the results reported in this tutorial are not only restricted to Bitcoin, but equally apply to a number of \"altcoins\" which are basically clones/forks of the Bitcoin source code. Given the increasing number of alternative blockchain proposals, this tutorial extracts the basic security lessons learnt from the Bitcoin system with the aim to foster better designs and analysis of next-generation secure blockchain currencies and technologies.",
"title": ""
},
{
"docid": "37dc52267a4d76ffe53ce97b6d8389fc",
"text": "Ns-2 and its successor ns-3 are discrete-event simulators. Ns-3 is still under development, but offers some interesting characteristics for developers while ns-2 still has a big user base. This paper remarks current differences between both tools from developers point of view. Leaving performance and resources consumption aside, technical issues described in the present paper might help to choose one or another alternative depending of simulation and project management requirements.",
"title": ""
},
{
"docid": "f90967525247030b9da04fc4c37b6c14",
"text": "Vehicle tracking using airborne wide-area motion imagery (WAMI) for monitoring urban environments is very challenging for current state-of-the-art tracking algorithms, compared to object tracking in full motion video (FMV). Characteristics that constrain performance in WAMI to relatively short tracks range from the limitations of the camera sensor array including low frame rate and georegistration inaccuracies, to small target support size, presence of numerous shadows and occlusions from buildings, continuously changing vantage point of the platform, presence of distractors and clutter among other confounding factors. We describe our Likelihood of Features Tracking (LoFT) system that is based on fusing multiple sources of information about the target and its environment akin to a track-before-detect approach. LoFT uses image-based feature likelihood maps derived from a template-based target model, object and motion saliency, track prediction and management, combined with a novel adaptive appearance target update model. Quantitative measures of performance are presented using a set of manually marked objects in both WAMI, namely Columbus Large Image Format (CLIF), and several standard FMV sequences. Comparison with a number of single object tracking systems shows that LoFT outperforms other visual trackers, including state-of-the-art sparse representation and learning based methods, by a significant amount on the CLIF sequences and is competitive on FMV sequences.",
"title": ""
},
{
"docid": "0c3eae28505f1bc8835e118d70bc3367",
"text": "Recent research [3,37,38] has proposed compute accelerators to address the energy efficiency challenge. While these compute accelerators specialize and improve the compute efficiency, they have tended to rely on address-based load/store memory interfaces that closely resemble a traditional processor core. The address-based load/store interface is particularly challenging in data-centric applications that tend to access different software data structures. While accelerators optimize the compute section, the address-based interface leads to wasteful instructions and low memory level parallelism (MLP). We study the benefits of raising the abstraction of the memory interface to data structures.\n We propose DASX (Data Structure Accelerator), a specialized state machine for data fetch that enables compute accelerators to efficiently access data structure elements in iterative program regions. DASX enables the compute accelerators to employ data structure based memory operations and relieves the compute unit from having to generate addresses for each individual object. DASX exploits knowledge of the program's iteration to i) run ahead of the compute units and gather data objects for the compute unit (i.e., compute unit memory operations do not encounter cache misses) and ii) throttle the fetch rate, adaptively tile the dataset based on the locality characteristics and guarantee cache residency. We demonstrate accelerators for three types of data structures, Vector, Key-Value (Hash) maps, and BTrees. We demonstrate the benefits of DASX on data-centric applications which have varied compute kernels but access few regular data structures. DASX achieves higher energy efficiency by eliminating data structure instructions and enabling energy efficient compute accelerators to efficiently access the data elements. We demonstrate that DASX can achieve 4.4x the performance of a multicore system by discovering more parallelism from the data structure.",
"title": ""
},
{
"docid": "047c486e94c217a9ce84cdd57fc647fe",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "46eaa1108cf5027b5427fda8fc9197ff",
"text": "ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.",
"title": ""
},
{
"docid": "21909d9d0a741061a65cf06e023f7aa2",
"text": "Integrated magnetics is applied to replace the three-discrete transformers by a single core transformer in a three-phase LLC resonant converter. The magnetic circuit of the integrated transformer is analyzed to derive coupling factors between the phases; these coupling factors are intentionally minimized to realize the magnetic behavior of the three-discrete transformers, with the benefit of eliminating the dead space between them. However, in a practical design, the transformer parameters in a multiphase LLC resonant converter are never exactly identical among the phases, leading to unbalanced current sharing between the paralleled modules. In this regard, a current balancing method is proposed in this paper. The proposed method can improve the current sharing between the paralleled phases relying on a single balancing transformer, and its theory is based on Ampere’s law, by forcing the sum of the three resonant currents to zero. Theoretically, if an ideal balancing transformer has been utilized, it would impose the same effect of connecting the integrated transformer in a solid star connection. However, as the core permeability of the balancing transformer is finite, the unbalanced current cannot be completely suppressed. Nonetheless, utilizing a single balancing transformer has an advantage over the star connection, as it keeps the interleaving structure simple which allows for traditional phase-shedding techniques, and it can be a solution for the other multiphase topologies where realizing a star connection is not feasible. Along with the theoretical discussion, simulation and experimental results are also presented to evaluate the proposed method considering various sources of the unbalance such as a mismatch in: 1) resonant and magnetizing inductances; 2) resonant capacitors; 3) transistor on-resistances of the MOSFETS; and 4) propagation delay of the gate drivers.",
"title": ""
},
{
"docid": "801bc7c6495eb25e0e5d666334554465",
"text": "1. Introduction More than 30 years after Blaise Cronin's seminal paper (Cronin, 1981; see reprint in this book) the metrics community is once again in need of a new theory, this time one for so-called \" altmetrics \". Altmetrics, short for alternative (to citation) metrics —and as such a misnomer— refers to a new group of metrics based (largely) on social media events relating to scholarly communication. The term originated on 29 September 2010 in a tweet by Jason Priem in which he uttered his preference for the word altmetrics in the context of various metrics provided for PLOS journal articles: \" I like the term #articlelevelmetrics, but it fails to imply *diversity* of measures. Lately, I'm liking #altmetrics. \" (Priem, 2010). Although Priem is responsible for coining the term, the idea of measuring broader scientific impact through the web and had been discussed by Cronin and others (e. see also Thelwall's chapter in this book) in the context of webometrics years before: Scholars may be cited formally, or merely mentioned en passant in listservs and others electronic discussion fora, or they may find that they have been included in reading lists or electronic syllabi. Polymorphous mentioning is likely to become a defining feature of Web-based scholarly communication. (Cronin et al., 1998) There will soon be a critical mass of web-based digital objects and usage statistics on which to Priem —co-author of the altmetrics manifesto (Priem, Taraborelli, Groth, & Neylon, 2010) and co-founder of ImpactStory 1 , an online tool aggregating various metrics on the researcher level— and colleagues argued that metrics based on 'traces' of use and production of scholarly output on social media platforms could help to improve scholarly communication and research evaluation. The term altmetrics was introduced out of the need to differentiate these new metrics from traditional citation-based indicators, which the altmetrics movement is seeking to replace or use as an alternative. The altmetrics manifesto and other work by Priem and colleagues appeal to the scientific community and research managers to \" value 1 https://impactstory.org/ 2 all research products \" (Piwowar, 2013), not just journal articles, and to measure impact in a broader sense by looking at more than just citations. The manifesto lists various sources of new metrics that would complement and replace traditional forms of publication, peer review, and citation analysis (Priem et al., 2010). Priem (2014) claimed that with scholarship moving online, former …",
"title": ""
},
{
"docid": "b6f9cc5eece3da40bf17f0c0b3d0bc55",
"text": "In-silico interaction studies on forty two tetranortriterpenoids, which include four classes of compounds azadiratchins, salannins, nimbins and intact limonoids, with actin have been carried out using Autodock Vina and Surflex Dock. The docking scores and predicted hydrogen bonds along with spatial confirmation of the molecules indicate that actin could be a possible target for insect antifeedant studies, and a good correlation has been obtained between the percentage feeding index (PFI) and the binding energy of these molecules. The enhancement of the activity in the photo products and its reduction in microwave products observed in in-vivo studies are well brought out by this study. The study reveals Arg 183 in actin to be the most favoured residue for binding in most compounds whereas Tyr 69 is favoured additionally for salannin and nimbin type of compounds. In the case of limonoids Gln 59 seems to have hydrogen bonding interactions with most of the compounds. The present study reveals that the fit for PFI vs. binding energy is better for individual classes of compounds and can be attributed to the binding of ligand with different residues. This comprehensive in-silico analysis of interaction between actin as a receptor and tetranortriterpenoids may help in the understanding of the mode of action of bioinsecticides, and designing better lead molecules.",
"title": ""
},
{
"docid": "8b66ffe2afae5f1f46b7803d80422248",
"text": "This paper describes the torque production capabilities of electrical machines with planar windings and presents an automated procedure for coils conductors' arrangement. The procedure has been applied on an ironless axial flux slotless permanent magnet machines having stator windings realized using printed circuit board (PCB) coils. An optimization algorithm has been implemented to find a proper arrangement of PCB traces in order to find the best compromise between the maximization of average torque and the minimization of torque ripple. A time-efficient numerical model has been developed to reduce computational load and thus make the optimization based design feasible.",
"title": ""
},
{
"docid": "0892815a2c9fb257faad12ca4c64a47d",
"text": "Evidence indicates that, despite some critical successes, current conservation approaches are not slowing the overall rate of biodiversity loss. The field of synthetic biology, which is capable of altering natural genomes with extremely precise editing, might offer the potential to resolve some intractable conservation problems (e.g., invasive species or pathogens). However, it is our opinion that there has been insufficient engagement by the conservation community with practitioners of synthetic biology. We contend that rapid, large-scale engagement of these two communities is urgently needed to avoid unintended and deleterious ecological consequences. To this point we describe case studies where synthetic biology is currently being applied to conservation, and we highlight the benefits to conservation biologists from engaging with this emerging technology.",
"title": ""
},
{
"docid": "3b6797a212eadcaf13d1f46064735190",
"text": "In this paper, the reconfigurable annular ring slot antenna with circular polarization diversity is proposed for SDMB(satellite digital multimedia broadcasting) system. The proposed antenna consists of a ring slot with tuning stubs. Four PIN diodes are attached to achieve circular polarization diversity. By switching the diodes on or off, the proposed antenna can be operated either RHCP(right hand circular polarization) mode or LHCP(left hand circular polarization) mode. The experimental result shows that the proposed antenna has an impedance bandwidth( VSWR les 2 ) of 2.47~3.04 GHz(570 MHz) at LHCP mode, an impedance bandwidth( VSWR les 2 ) of 2.45~3.01GHz(560 MHz) at RHCP mode, a maximum gain of 3.1dBi at RHCP mode, 4.76dBi at LHCP mode. The 3dB CP bandwidth of about 100 MHz at both RHCP and LHCP mode is achieved at the center frequency 2.63 GHz. The proposed antenna is suitable for application such as mobile satellite communications, WLAN(wireless local area networks), and broadband wireless communication systems.",
"title": ""
},
{
"docid": "98fb1bcf6158af203c0515b64121ead0",
"text": "Resolvers are absolute angle transducers that are usually used for position and speed measurement in permanent magnet motors. An observer that uses the sinusoidal signals of the resolver for this measurement is called an Angle Tracking Observer (ATO). Current designs for such observers are not stable in high acceleration and high-speed applications. This paper introduces a novel hybrid scheme for ATO design, in which a closed-loop LTI observer is combined with a quadrature encoder. Finite gain stability of the proposed design is proven based on the circle theorem in input-output stability theory. Simulation results show that the proposed ATO design is stable in two cases where an LTI observer and an extended Kalman filter are unstable due to high speed and acceleration,. In addition, the tracking accuracy of our hybrid scheme is substantially higher than a single quadrature encoder.",
"title": ""
},
{
"docid": "bfc12c790b5195861ba74f024d7cc9b5",
"text": "Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the 'social regulatory cycle', specifies at multiple levels of description the act of regulating another person's emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems.",
"title": ""
},
{
"docid": "a7cc7076d324f33d5e9b40756c5e1631",
"text": "Social learning analytics introduces tools and methods that help improving the learning process by providing useful information about the actors and their activity in the learning system. This study examines the relation between SNA parameters and student outcomes, between network parameters and global course performance, and it shows how visualizations of social learning analytics can help observing the visible and invisible interactions occurring in online distance education. The findings from our empirical study show that future research should further investigate whether there are conditions under which social network parameters are reliable predictors of academic performance, but also advises against relying exclusively in social network parameters for predictive purposes. The findings also show that data visualization is a useful tool for social learning analytics, and how it may provide additional information about actors and their behaviors for decision making in online distance",
"title": ""
},
{
"docid": "a9f11d3439f7e3f2d739ea16d3327d1e",
"text": "Objective: Diabetes is a common, debilitating chronic illness with multiple impacts. The impact on treatment satisfaction, productivity impairment and the symptom experience may be among the most important for patient-reported outcomes. This study developed and validated disease-specific, patient-reported measures for these outcomes that address limitations in currently available measures. Methods: Data was collected from the literature, experts and patients and a conceptual model of the patient-reported impact of diabetes was created. Item pools, based on the conceptual model, were then generated. The items were administered to 991 diabetes patients via a web-based survey to perform item reduction, identify relevant factor structures and assess reliability and validity following an a-priori analysis plan. Results: All validation criteria and hypotheses were met resulting in three new, valid measures: a 21-item Satisfaction Measure (three sub-scales: burden, efficacy and symptoms), a 30-item Symptom Measure and a 14-item Productivity Measure assessing both life and work productivity impairments.Conclusion: This triad of measures captures important components of the multifaceted diabetes patient experience and can be considered as valid, viable options when choosing measures to assess patient-reported outcomes. Addressing these outcomes may assist researchers and clinicians to develop more patient-centered diabetes interventions and care.",
"title": ""
},
{
"docid": "5ae890862d844ce03359624c3cb2012b",
"text": "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd software architecture in practice second edition that can be a new way to explore the knowledge. When reading this book, you can get one thing to always remember in every reading time, even step by step.",
"title": ""
},
{
"docid": "597d49edde282e49703ba0d9e02e3f1e",
"text": "BACKGROUND\nThe vitamin D receptor (VDR) pathway is important in the prevention and potentially in the treatment of many cancers. One important mechanism of VDR action is related to its interaction with the Wnt/beta-catenin pathway. Agonist-bound VDR inhibits the oncogenic Wnt/beta-catenin/TCF pathway by interacting directly with beta-catenin and in some cells by increasing cadherin expression which, in turn, recruits beta-catenin to the membrane. Here we identify TCF-4, a transcriptional regulator and beta-catenin binding partner as an indirect target of the VDR pathway.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn this work, we show that TCF-4 (gene name TCF7L2) is decreased in the mammary gland of the VDR knockout mouse as compared to the wild-type mouse. Furthermore, we show 1,25(OH)2D3 increases TCF-4 at the RNA and protein levels in several human colorectal cancer cell lines, the effect of which is completely dependent on the VDR. In silico analysis of the human and mouse TCF7L2 promoters identified several putative VDR binding elements. Although TCF7L2 promoter reporters responded to exogenous VDR, and 1,25(OH)2D3, mutation analysis and chromatin immunoprecipitation assays, showed that the increase in TCF7L2 did not require recruitment of the VDR to the identified elements and indicates that the regulation by VDR is indirect. This is further confirmed by the requirement of de novo protein synthesis for this up-regulation.\n\n\nCONCLUSIONS/SIGNIFICANCE\nAlthough it is generally assumed that binding of beta-catenin to members of the TCF/LEF family is cancer-promoting, recent studies have indicated that TCF-4 functions instead as a transcriptional repressor that restricts breast and colorectal cancer cell growth. Consequently, we conclude that the 1,25(OH)2D3/VDR-mediated increase in TCF-4 may have a protective role in colon cancer as well as diabetes and Crohn's disease.",
"title": ""
},
{
"docid": "afe0c431852191bc2316d1c5091f239b",
"text": "Dynamic models of pneumatic artificial muscles (PAMs) are important for simulation of the movement dynamics of the PAM-based actuators and also for their control. The simple models of PAMs are geometric models, which can be relatively easy used under certain simplification for obtaining of the static and dynamic characteristics of the pneumatic artificial muscle. An advanced geometric muscle model is used in paper for describing the dynamic behavior of PAM based antagonistic actuator.",
"title": ""
},
{
"docid": "c08bf7c0a77ceea4c62df17cc1cd22b9",
"text": "This paper studies the role of f luctuations in the aggregate consumption–wealth ratio for predicting stock returns. Using U.S. quarterly stock market data, we find that these f luctuations in the consumption–wealth ratio are strong predictors of both real stock returns and excess returns over a Treasury bill rate. We also find that this variable is a better forecaster of future returns at short and intermediate horizons than is the dividend yield, the dividend payout ratio, and several other popular forecasting variables. Why should the consumption–wealth ratio forecast asset returns? We show that a wide class of optimal models of consumer behavior imply that the log consumption–aggregate wealth ~human capital plus asset holdings! ratio summarizes expected returns on aggregate wealth, or the market portfolio. Although this ratio is not observable, we provide assumptions under which its important predictive components for future asset returns may be expressed in terms of observable variables, namely in terms of consumption, asset holdings and labor income. The framework implies that these variables are cointegrated, and that deviations from this shared trend summarize agents’ expectations of future returns on the market portfolio. UNDERSTANDING THE EMPIRICAL LINKAGES between macroeconomic variables and financial markets has long been a goal of financial economics. One reason for the interest in these linkages is that expected excess returns on common stocks appear to vary with the business cycle. This evidence suggests that stock returns should be forecastable by business cycle variables at cyclical frequencies. Indeed, the forecastability of stock returns is well documented. Financial indicators such as the ratios of price to dividends, price to earnings, or dividends to earnings have predictive power for excess returns over a Treasury-bill rate. These financial variables, however, have been most successful at predicting returns over long horizons. Over horizons spanning the * Lettau and Ludvigson are at the Research Department, Federal Reserve Bank of New York. The authors are grateful to Gregory Bauer, John Y. Campbell, Steve Cecchetti, Todd Clark, Michael Cooper, Wayne Ferson, Kenneth French, Owen Lamont, James Stock, Kenneth West, an anonymous referee, Rick Green ~the editor!, and to seminar participants at the NBER Asset Pricing Meeting May 1999, the NBER Summer Institute July 2000, the CEPR European Summer Symposium in Finance July 1999, the University of Amsterdam, the University of Bielefeld, Hunter College, Indiana University, Northwestern University, the University of Rochester, and the New York Federal Reserve for helpful comments. Jeffrey Brown and Claire Liou provided excellent research assistance. The views expressed are those of the authors and do not necessarily ref lect those of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors. THE JOURNAL OF FINANCE • VOL. LVI, NO. 3 • JUNE 2001",
"title": ""
}
] |
scidocsrr
|
d5cee6a9c8273585f02c612dee8d2b17
|
What is beautiful is usable
|
[
{
"docid": "d380a5de56265c80309733370c612316",
"text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.",
"title": ""
}
] |
[
{
"docid": "c39b143861d1e0c371ec1684bb29f4cc",
"text": "Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed.",
"title": ""
},
{
"docid": "be7b6112f147213511a3c433337c2da7",
"text": "We assessed the physical and chemical stability of docetaxel infusion solutions. Stability of the antineoplastic drug was determined 1.) after reconstitution of the injection concentrate and 2.) after further dilution in two commonly used vehicle‐solutions, 0.9% sodium chloride and 5% dextrose, in PVC bags and polyolefine containers. Chemical stability was measured by using a stability‐indicating HPLC assay with ultraviolet detection. Physical stability was determined by visual inspection. The stability tests revealed that reconstituted docetaxel solutions (= premix solutions) are physico‐chemically stable (at a level ≥ 95% docetaxel) for a minimum of four weeks, independent of the storage temperature (refrigerated, room temperature). Diluted infusion solutions (docetaxel concentration 0.3 mg/ml and 0.9 mg/ml), with either vehicle‐solution, proved physico‐chemically stable (at a level ≥ 95% docetaxel) for a minimum of four weeks, when prepared in polyolefine containers and stored at room temperature. However, diluted infusion solutions exhibited limited physical stability in PVC bags, because docetaxel precipitation occured irregularly, though not before day 5 of storage. In addition, time‐dependent DEHP‐leaching from PVC infusion bags by docetaxel infusion solutions must be considered.",
"title": ""
},
{
"docid": "a1317e75e1616b2922e5df02f69076d9",
"text": "Fixed-length embeddings of words are very useful for a variety of tasks in speech and language processing. Here we systematically explore two methods of computing fixed-length embeddings for variable-length sequences. We evaluate their susceptibility to phonetic and speaker-specific variability on English, a high resource language, and Xitsonga, a low resource language, using two evaluation metrics: ABX word discrimination and ROC-AUC on same-different phoneme n-grams. We show that a simple downsampling method supplemented with length information can be competitive with the variable-length input feature representation on both evaluations. Recurrent autoencoders trained without supervision can yield even better results at the expense of increased computational complexity.",
"title": ""
},
{
"docid": "d935679ba64755efc915cdfd4178f995",
"text": "A dual-band passive radio frequency identification (RFID) tag antenna applicable for a recessed cavity in metallic objects such as heavy equipment, vehicles, aircraft, and containers with long read range is proposed by using an artificial magnetic conductor (AMC) ground plane. The proposed tag antenna consists of a bowtie antenna and a recessed cavity with the AMC ground plane installed on the bottom side of the cavity. The AMC ground plane is utilized to provide dual-band operation at European (869.5 869.7 MHz) and Korean (910 914 MHz) passive UHF RFID bands by replacing the bottom side of the metallic cavity of a PEC-like behavior and, therefore, changing the reflection phase of the ground plane. It is worthwhile to mention that the European and the Korean UHF RFID bands are allocated very closely, and the frequency separation ratio between the two bands is just about 0.045, which is very small. It is demonstrated by experiment that the maximum reading distance of the proposed tag antenna with optimized dimensions can be improved more than 3.1 times at the two RFID bands compared to a commercial RFID tag.",
"title": ""
},
{
"docid": "f55c7479777d1b5c2265369d69c5f789",
"text": "In an object-oriented program, a unit test often consists of a sequence of method calls that create and mutate objects, then use them as arguments to a method under test. It is challenging to automatically generate sequences that are legal and behaviorally-diverse, that is, reaching as many different program states as possible.\n This paper proposes a combined static and dynamic automated test generation approach to address these problems, for code without a formal specification. Our approach first uses dynamic analysis to infer a call sequence model from a sample execution, then uses static analysis to identify method dependence relations based on the fields they may read or write. Finally, both the dynamically-inferred model (which tends to be accurate but incomplete) and the statically-identified dependence information (which tends to be conservative) guide a random test generator to create legal and behaviorally-diverse tests.\n Our Palus tool implements this testing approach. We compared its effectiveness with a pure random approach, a dynamic-random approach (without a static phase), and a static-random approach (without a dynamic phase) on several popular open-source Java programs. Tests generated by Palus achieved higher structural coverage and found more bugs.\n Palus is also internally used in Google. It has found 22 previously-unknown bugs in four well-tested Google products.",
"title": ""
},
{
"docid": "1997b8a0cac1b3beecfd79b3e206d7e4",
"text": "Scatterplots are well established means of visualizing discrete data values with two data variables as a collection of discrete points. We aim at generalizing the concept of scatterplots to the visualization of spatially continuous input data by a continuous and dense plot. An example of a continuous input field is data defined on an n-D spatial grid with respective interpolation or reconstruction of in-between values. We propose a rigorous, accurate, and generic mathematical model of continuous scatterplots that considers an arbitrary density defined on an input field on an n-D domain and that maps this density to m-D scatterplots. Special cases are derived from this generic model and discussed in detail: scatterplots where the n-D spatial domain and the m-D data attribute domain have identical dimension, 1-D scatterplots as a way to define continuous histograms, and 2-D scatterplots of data on 3-D spatial grids. We show how continuous histograms are related to traditional discrete histograms and to the histograms of isosurface statistics. Based on the mathematical model of continuous scatterplots, respective visualization algorithms are derived, in particular for 2-D scatterplots of data from 3-D tetrahedral grids. For several visualization tasks, we show the applicability of continuous scatterplots. Since continuous scatterplots do not only sample data at grid points but interpolate data values within cells, a dense and complete visualization of the data set is achieved that scales well with increasing data set size. Especially for irregular grids with varying cell size, improved results are obtained when compared to conventional scatterplots. Therefore, continuous scatterplots are a suitable extension of a statistics visualization technique to be applied to typical data from scientific computation.",
"title": ""
},
{
"docid": "d7d66f89e5f5f2d6507e0939933b3a17",
"text": "The discarded clam shell waste, fossil and edible oil as biolubricant feedstocks create environmental impacts and food chain dilemma, thus this work aims to circumvent these issues by using activated saltwater clam shell waste (SCSW) as solid catalyst for conversion of Jatropha curcas oil as non-edible sources to ester biolubricant. The characterization of solid catalyst was done by Differential Thermal Analysis-Thermo Gravimetric Analysis (DTATGA), X-Ray Fluorescence (XRF), X-Ray Diffraction (XRD), Brunauer-Emmett-Teller (BET), Field Emission Scanning Electron Microscopy (FESEM) and Fourier Transformed Infrared Spectroscopy (FTIR) analysis. The calcined catalyst was used in the transesterification of Jatropha oil to methyl ester as the first step, and the second stage was involved the reaction of Jatropha methyl ester (JME) with trimethylolpropane (TMP) based on the various process parameters. The formated biolubricant was analyzed using the capillary column (DB-5HT) equipped Gas Chromatography (GC). The conversion results of Jatropha oil to ester biolubricant can be found nearly 96.66%, and the maximum distribution composition mainly contains 72.3% of triester (TE). Keywords—Conversion, ester biolubricant, Jatropha curcas oil, solid catalyst.",
"title": ""
},
{
"docid": "72e1a2bf37495439a12a53f4b842c218",
"text": "A new transmission model of human malaria in a partially immune population with three discrete delays is formulated for variable host and vector populations. These are latent period in the host population, latent period in the vector population and duration of partial immunity. The results of our mathematical analysis indicate that a threshold parameterR0 exists. ForR0 > 1, the expected number of mosquitoes infected from humansRhm should be greater than a certain critical valueR∗hm or should be less thanR∗hm whenR ∗ hm > 1, for a stable endemic equilibrium to exist. We deduce from model analysis that an increase in the period within which partial immunity is lost increases the spread of the disease. Numerically we deduce that treatment of the partially immune humans assists in reducing the severity of the disease and that transmission blocking vaccines would be effective in a partially immune population. Numerical simulations support our analytical conclusions and illustrate possible behaviour scenarios of the model. c © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ed4050c6934a5a26fc377fea3eefa3bc",
"text": "This paper presents the design of the permanent magnetic system for the wall climbing robot with permanent magnetic tracks. A proposed wall climbing robot with permanent magnetic adhesion mechanism for inspecting the oil tanks is briefly put forward, including the mechanical system architecture. The permanent magnetic adhesion mechanism and the tracked locomotion mechanism are employed in the robot system. By static and dynamic force analysis of the robot, design parameters about adhesion mechanism are derived. Two types of the structures of the permanent magnetic units are given in the paper. The analysis of those two types of structure is also detailed. Finally, two wall climbing robots equipped with those two different magnetic systems are discussed and the experiments are included in the paper.",
"title": ""
},
{
"docid": "aa2e16e6ed5d2610a567e358807834d4",
"text": "As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of this type of schemes have wave upon wave been proposed. In most of these studies, there is no comprehensive and systematical metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of the superior aspects over previous ones, while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory—either are found short of important security goals or lack of critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model so far. In particular, by integrating “honeywords”, traditionally the purview of system security, with a “fuzzy-verifier”, our scheme hits “two birds”: it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.",
"title": ""
},
{
"docid": "11c3b4c63bb9cdc19f542bb477cca191",
"text": "Although there are many motion planning techniques, there is no single one that performs optimally in every environment for every movable object. Rather, each technique has different strengths and weaknesses which makes it best-suited for particular types of situations. Also, since a given environment can consist of vastly different regions, there may not even be a single planner that is well suited for the problem. Ideally, one would use a suite of planners in concert to solve the problem by applying the best-suited planner in each region. In this paper, we propose an automated framework for feature-sensitive motion planning. We use a machine learning approach to characterize and partition C-space into (possibly overlapping) regions that are well suited to one of the planners in our library of roadmap-based motion planning methods. After the best-suited method is applied in each region, their resulting roadmaps are combined to form a roadmap of the entire planning space. We demonstrate on a range of problems that our proposed feature-sensitive approach achieves results superior to those obtainable by any of the individual planners on their own. “A Machine Learning Approach for ...”, Morales et al. TR04-001, Parasol Lab, Texas A&M, February 2004 1",
"title": ""
},
{
"docid": "dc76c7e939d26a6a81a8eb891b5824b7",
"text": "While deeper and wider neural networks are actively pushing the performance limits of various computer vision and machine learning tasks, they often require large sets of labeled data for effective training and suffer from extremely high computational complexity. In this paper, we will develop a new framework for training deep neural networks on datasets with limited labeled samples using cross-network knowledge projection which is able to improve the network performance while reducing the overall computational complexity significantly. Specifically, a large pre-trained teacher network is used to observe samples from the training data. A projection matrix is learned to project this teacher-level knowledge and its visual representations from an intermediate layer of the teacher network to an intermediate layer of a thinner and faster student network to guide and regulate its training process. Both the intermediate layers from the teacher network and the injection layers from the student network are adaptively selected during training by evaluating a joint loss function in an iterative manner. This knowledge projection framework allows us to use crucial knowledge learned by large networks to guide the training of thinner student networks, avoiding over-fitting, achieving better network performance, and significantly reducing the complexity. Extensive experimental results on benchmark datasets have demonstrated that our proposed knowledge projection approach outperforms existing methods, improving accuracy by up to 4% while reducing network complexity by 4 to 10 times, which is very attractive for practical applications of deep neural networks.",
"title": ""
},
{
"docid": "a27a05cb00d350f9021b5c4f609d772c",
"text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.",
"title": ""
},
{
"docid": "b9ca1209ce50bf527d68109dbdf7431c",
"text": "The MATLAB model of the analog multiplier based on the sigma delta modulation is developed. Different modes of multiplier are investigated and obtained results are compared with analytical results.",
"title": ""
},
{
"docid": "b4abfa56d69919d264ed9ccb9a8cd2c7",
"text": "Electronic commerce (e-commerce) continues to have a profound impact on the global business environment, but technologies and applications also have begun to focus more on mobile computing, the wireless Web, and mobile commerce. Against this backdrop, mobile banking (m-banking) has emerged as an important distribution channel, with considerable research devoted to its adoption. However, this research stream has lacked a clear roadmap or agenda. Therefore, the present article analyzes and synthesizes existing studies of m-banking adoption and maps the major theories that researchers have used to predict consumer intentions to adopt it. The findings indicate that the m-banking adoption literature is fragmented, though it commonly relies on the technology acceptance model and its modifications, revealing that compatibility (with lifestyle and device), perceived usefulness, and attitude are the most significant drivers of intentions to adopt m-banking services in developed and developing countries. Moreover, the extant literature appears limited by its narrow focus on SMS banking in developing countries; virtually no studies address the use of m-banking applications via smartphones or tablets or consider the consequences of such usage. This study makes several recommendations for continued research in the area of mobile banking.",
"title": ""
},
{
"docid": "0d1193978e4f8be0b78c6184d7ece3fe",
"text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …",
"title": ""
},
{
"docid": "f4e15eb37843ff4e2938b1b69ab88cb3",
"text": "Static analysis tools are often used by software developers to entail early detection of potential faults, vulnerabilities, code smells, or to assess the source code adherence to coding standards and guidelines. Also, their adoption within Continuous Integration (CI) pipelines has been advocated by researchers and practitioners. This paper studies the usage of static analysis tools in 20 Java open source projects hosted on GitHub and using Travis CI as continuous integration infrastructure. Specifically, we investigate (i) which tools are being used and how they are configured for the CI, (ii) what types of issues make the build fail or raise warnings, and (iii) whether, how, and after how long are broken builds and warnings resolved. Results indicate that in the analyzed projects build breakages due to static analysis tools are mainly related to adherence to coding standards, and there is also some attention to missing licenses. Build failures related to tools identifying potential bugs or vulnerabilities occur less frequently, and in some cases such tools are activated in a \"softer\" mode, without making the build fail. Also, the study reveals that build breakages due to static analysis tools are quickly fixed by actually solving the problem, rather than by disabling the warning, and are often properly documented.",
"title": ""
},
{
"docid": "47db0fdd482014068538a00f7dc826a9",
"text": "Importance\nThe use of palliative care programs and the number of trials assessing their effectiveness have increased.\n\n\nObjective\nTo determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers.\n\n\nData Sources\nMEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016.\n\n\nStudy Selection\nRandomized clinical trials of palliative care interventions in adults with life-limiting illness.\n\n\nData Extraction and Synthesis\nTwo reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy-palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points).\n\n\nMain Outcomes and Measures\nQuality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures.\n\n\nResults\nForty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36] and symptom burden at the 1- to 3-month follow-up (standardized mean difference, -0.66; 95% CI, -1.25 to -0.07; ESAS mean difference, -10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, -0.21; 95% CI, -0.42 to 0.00; ESAS mean difference, -3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed.\n\n\nConclusions and Relevance\nIn this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.",
"title": ""
},
{
"docid": "cb4adbfa09f4ad217fe1efa9541ab5ab",
"text": "This paper presents an efficient implementation of the Wavenet generation process called Fast Wavenet. Compared to a naı̈ve implementation that has complexity O(2) (L denotes the number of layers in the network), our proposed approach removes redundant convolution operations by caching previous calculations, thereby reducing the complexity to O(L) time. Timing experiments show significant advantages of our fast implementation over a naı̈ve one. While this method is presented for Wavenet, the same scheme can be applied anytime one wants to perform autoregressive generation or online prediction using a model with dilated convolution layers. The code for our method is publicly available.",
"title": ""
},
{
"docid": "38bb20a4be56f408a621b1e9e8e4bf6d",
"text": "In the last five years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 20 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only 9 of these algorithms are significantly more accurate than both benchmarks and that one classifier, the Collective of Transformation Ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more rigorous testing of new algorithms in the future.",
"title": ""
}
] |
scidocsrr
|
e6d0765ddfd119b724166d8eab468ab5
|
Creation of a deep convolutional auto-encoder in Caffe
|
[
{
"docid": "c0d794e7275e7410998115303bf0cf79",
"text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.",
"title": ""
}
] |
[
{
"docid": "1274656b97db1f736944c174a174925d",
"text": "In full-duplex systems, due to the strong self-interference signal, system nonlinearities become a significant limiting factor that bounds the possible cancellable self-interference power. In this paper, a self-interference cancellation scheme for full-duplex orthogonal frequency division multiplexing systems is proposed. The proposed scheme increases the amount of cancellable self-interference power by suppressing the distortion caused by the transmitter and receiver nonlinearities. An iterative technique is used to jointly estimate the self-interference channel and the nonlinearity coefficients required to suppress the distortion signal. The performance is numerically investigated showing that the proposed scheme achieves a performance that is less than 0.5dB off the performance of a linear full-duplex system.",
"title": ""
},
{
"docid": "91cc20123f536f764b8125daa5fbe8ae",
"text": "Word embeddings are real-valued word representations able to capture lexical semantics and trained on natural language corpora. Models proposing these representations have gained popularity in the recent years, but the issue of the most adequate evaluation method still remains open. This paper presents an extensive overview of the field of word embeddings evaluation, highlighting main problems and proposing a typology of approaches to evaluation, summarizing 16 intrinsic methods and 12 extrinsic methods. I describe both widely-used and experimental methods, systematize information about evaluation datasets and discuss some key challenges.",
"title": ""
},
{
"docid": "b513ebf0ad309c676ca27f2359a61df7",
"text": "ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classi cation and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing di culty are introduced throughout the six rounds of the challenge. (Participants can enter the competition in any round.) The rounds alternate phases in which learners are tested on datasets participants have not seen (AutoML), and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML.",
"title": ""
},
{
"docid": "a5a4b9667996958cc591da63811e2904",
"text": "Human activity recognition (HAR) is a promising research issue in ubiquitous and wearable computing. However, there are some problems existing in traditional methods: 1) They treat HAR as a single label classification task, and ignore the information from other related tasks, which is helpful for the original task. 2) They need to predesign features artificially, which are heuristic and not tightly related to HAR task. To address these problems, we propose AROMA (human activity recognition using deep multi-task learning). Human activities can be divided into simple and complex activities. They are closely linked. Simple and complex activity recognitions are two related tasks in AROMA. For simple activity recognition task, AROMA utilizes a convolutional neural network (CNN) to extract deep features, which are task dependent and non-handcrafted. For complex activity recognition task, AROMA applies a long short-term memory (LSTM) network to learn the temporal context of activity data. In addition, there is a shared structure between the two tasks, and the object functions of these two tasks are optimized jointly. We evaluate AROMA on two public datasets, and the experimental results show that AROMA is able to yield a competitive performance in both simple and complex activity recognitions.",
"title": ""
},
{
"docid": "272affb51cec7bf4fe0cbe8b10331977",
"text": "During an earthquake, structures are subjected to both horizontal and vertical shaking. Most structures are rather insensitive to variations in the vertical acceleration history and primary considerations are given to the impact of the horizontal shaking on the behavior of structures. In the laboratory, however, most component tests are carried out under uni-directional horizontal loading to simulate earthquake effects rather than bi-directional loading. For example, biaxial loading tests of reinforced concrete (RC) walls constitute less than 0.5% of all quasi-static cyclic tests that have been conducted. Bi-directional tests require larger and more complex test setups than uni-directional tests and therefore should only be pursued if they provide insights and results that cannot be obtained from uni-directional tests. To investigate the influence of bi-directional loading on RC wall performance, this paper reviews results from quasi-static cyclic tests on RC walls that are reported in the literature. Results from uni-directional tests are compared to results from bi-directional tests for walls of different cross sections including rectangular walls, T-shaped walls, and U-shaped walls. The available test data are analyzed with regard to the influence of the loading history on stiffness, strength, deformation capacity and failure mode. Walls with T-shaped and Ushaped cross sections are designed to carry loads in both horizontal directions and thus consideration of the impact of bidirectional loading on behavior should be considered. However, it is also shown that the displacement capacity of walls with rectangular cross sections is typically reduced by 20 to 30% due to bi-directional loading. Further analysis of the test data indicates that the bi-directional loading protocol selected might impact wall strength and stiffness of the test specimen. Based on these findings, future research needs with regard to the response of RC walls subjected to bi-directional loading are provided.",
"title": ""
},
{
"docid": "6c5c6e201e2ae886908aff554866b9ed",
"text": "HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally support for prediction and soft clustering is also available.",
"title": ""
},
{
"docid": "31267ecec7222360123bda162c01dd8f",
"text": "This paper briefly overviews progress on the development of MEMS-based micropumps and their applications in drug delivery and other iomedical applications such as micrototal analysis systems ( TAS) or lab-on-a-chip and point of care testing systems (POCT). The focus of the eview is to present key features of micropumps such as actuation methods, working principles, construction, fabrication methods, performance arameters and their medical applications. Micropumps have been categorized as mechanical or non-mechanical based on the method by which ctuation energy is obtained to drive fluid flow. The survey attempts to provide a comprehensive reference for researchers working on design nd development of MEMS-based micropumps and a source for those outside the field who wish to select the best available micropump for a pecific drug delivery or biomedical application. Micropumps for transdermal insulin delivery, artificial sphincter prosthesis, antithrombogenic icropumps for blood transportation, micropump for injection of glucose for diabetes patients and administration of neurotransmitters to neurons nd micropumps for chemical and biological sensing have been reported. Various performance parameters such as flow rate, pressure generated nd size of the micropump have been compared to facilitate selection of appropriate micropump for a particular application. Electrowetting, lectrochemical and ion conductive polymer film (ICPF) actuator micropumps appear to be the most promising ones which provide adequate flow ates at very low applied voltage. Electroosmotic micropumps consume high voltages but exhibit high pressures and are intended for applications here compactness in terms of small size is required along with high-pressure generation. Bimetallic and electrostatic micropumps are smaller n size but exhibit high self-pumping frequency and further research on their design could improve their performance. Micropumps based on iezoelectric actuation require relatively high-applied voltage but exhibit high flow rates and have grown to be the dominant type of micropumps n drug delivery systems and other biomedical applications. Although a lot of progress has been made in micropump research and performance of icropumps has been continuously increasing, there is still a need to incorporate various categories of micropumps in practical drug delivery and iomedical devices and this will continue to provide a substantial stimulus for micropump research and development in future. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5f2b29b87d4d5d9c9eeb176d044c00f3",
"text": "Automated pose estimation is a fundamental task in computer vision. In this paper, we investigate the generic framework of Cascaded Pose Regression (CPR), which demonstrates practical effectiveness in pose estimation on deformable and articulated objects. In particular, we focus on the use of CPR for face alignment by exploring existing techniques and verifying their performances on different public facial datasets. We show that the correct selection of pose-invariant features is critical to encode the geometric arrangement of landmarks and crucial for the overall regressor learnability. Furthermore, by incorporating strategies that are commonly used among the state-of-the-art, we interpret the CPR training procedure as a repeated clustering problem with explicit regressor representation, which is complementary to the original CPR algorithm. In our experiment, the qualitative evaluation of existing alignment techniques demonstrates the success of CPR for facial pose inference that can be conveniently adopted to video detection and tracking applications.",
"title": ""
},
{
"docid": "abea5fcab86877f1d085183a714bc37d",
"text": "In this work, we introduce the challenging problem of joint multi-person pose estimation and tracking of an unknown number of persons in unconstrained videos. Existing methods for multi-person pose estimation in images cannot be applied directly to this problem, since it also requires to solve the problem of person association over time in addition to the pose estimation for each person. We therefore propose a novel method that jointly models multi-person pose estimation and tracking in a single formulation. To this end, we represent body joint detections in a video by a spatio-temporal graph and solve an integer linear program to partition the graph into sub-graphs that correspond to plausible body pose trajectories for each person. The proposed approach implicitly handles occlusion and truncation of persons. Since the problem has not been addressed quantitatively in the literature, we introduce a challenging Multi-Person PoseTrack dataset, and also propose a completely unconstrained evaluation protocol that does not make any assumptions about the scale, size, location or the number of persons. Finally, we evaluate the proposed approach and several baseline methods on our new dataset.",
"title": ""
},
{
"docid": "0186ead8a32677289f73920af5a65d19",
"text": "The tall building is the most dominating symbol of the cities and a human-made marvel that defies gravity by reaching to the clouds. It embodies unrelenting human aspirations to build even higher. It conjures a number of valid questions in our minds. The foremost and fundamental question that is often asked: Why tall buildings? This review paper seeks to answer the question by laying out arguments against and for tall buildings. Then, it provides a brief account of the historic and recent developments of tall buildings including their status during the current economic recession. The paper argues that as cities continue to expand horizontally, to safeguard against their reaching an eventual breaking point, the tall building as a building type is a possible solution by way of conquering vertical space through agglomeration and densification. Case studies of some recently built tall buildings are discussed to illustrate the nature of tall building development in their respective cities. The paper attempts to dispel any discernment about tall buildings as mere pieces of art and architecture by emphasizing their truly speculative, technological, sustainable, and evolving nature. It concludes by projecting a vision of tall buildings and their integration into the cities of the 21st century.",
"title": ""
},
{
"docid": "ad389d8ee2c45746c3a44c7e0f86de40",
"text": "Deep Convolutional Neural Networks (CNN) have recently been shown to outperform previous state of the art approaches for image classification. Their success must in parts be attributed to the availability of large labeled training sets such as provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have proven to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when external data and target domain show similar visual characteristics. Transfer-learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for classification of paintings into art epochs. Specifically, we evaluate the impact of training set sizes on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise propose Fisher Encodings in scenarios were training data is limited.",
"title": ""
},
{
"docid": "a7202e304c01d07c39b0adf96f4e4930",
"text": "Augmented Reality has attracted interest for its p otential as a platform for new compelling usages. This paper provides an overview of technical challenges in imaging and optics encountered in near-eye optical see-through AR disp lay systems. OCIS codes: (000.4930) Other topics of general interest; (000. 2170) Equipment and techniques",
"title": ""
},
{
"docid": "4d5317c069450b785a77c98581494782",
"text": "at Columbia University for support during the writing of the early draft of paper, and to numerous readers—particularly the three anonymous reviewers—for their suggestions. Opinions and analysis are the author's, and not necessarily those of Microsoft Corporation. Abstract The paper reviews roughly 200 recent studies of mobile (cellular) phone use in the developing world, and identifies major concentrations of research. It categorizes studies along two dimensions. One dimension distinguishes studies of the determinants of mobile adoption from those that assess the impacts of mobile use, and from those focused on the interrelationships between mobile technologies and users. A secondary dimension identifies a subset of studies with a strong economic development perspective. The discussion considers the implications of the resulting review and typology for future research.",
"title": ""
},
{
"docid": "aca306249fc0f628e55ced948643b4e4",
"text": "New ICT technologies are continuously introducing changes in the way in which society generates, shares and access information. This is changing what society expects and requires of education. eLearning is acting as a vector of this change, introducing pervasive transformations in and out of the classroom. But with Learning Management Systems (LMS) users have reached a plateau of productivity and stability. At the same time outside the walled garden of the LMS new transformative tools, services and ways of learning are already in use, within the PLE and PLN paradigms. The stability and maturity of the LMS may become yet another resistance factor working against the introduction of innovations. New tools and trends cannot be ignored, and this is the reason why learning platforms should become open and flexible environments. In the course of this article the reasons for this change and how it may be addressed will be discussed, together with a proposal for architecture based on Moodle.",
"title": ""
},
{
"docid": "4ab881c788f0d819f12094f5b9589135",
"text": "The Global Navigation Satellite Systems (GNSS) suffer from accuracy deterioration and outages in dense urban canyons and are almost unavailable for indoor environments. Nowadays, developing indoor positioning systems has become an attractive research topic due to the increasing demands on ubiquitous positioning. WiFi technology has been studied for many years to provide indoor positioning services. The WiFi indoor localization systems based on machine learning approach are widely used in the literature. These systems attempt to find the perfect match between the user fingerprint and pre-defined set of grid points on the radio map. However, Fingerprints are duplicated from available Access Points (APs) and interference, which increase number of matched patterns with the user's fingerprint. In this research, the Principle Component Analysis (PCA) is utilized to improve the performance and to reduce the computation cost of the WiFi indoor localization systems based on machine learning approach. All proposed methods were developed and physically realized on Android-based smart phone using the IEEE 802.11 WLANs. The experimental setup was conducted in a real indoor environment in both static and dynamic modes. The performance of the proposed method was tested using K-Nearest Neighbors, Decision Tree, Random Forest and Support Vector Machine classifiers. The results show that the performance of the proposed method outperforms other indoor localization reported in the literature. The computation time was reduced by 70% when using Random Forest classifier in the static mode and by 33% when using KNN in the dynamic mode.",
"title": ""
},
{
"docid": "5e2536588d34ab0067af1bd716489531",
"text": "Recommender systems support user decision-making, and explanations of recommendations further facilitate their usefulness. Previous explanation styles are based on similar users, similar items, demographics of users, and contents of items. Contexts, such as usage scenarios and accompanying persons, have not been used for explanations, although they influence user decisions. In this paper, we propose a context style explanation method, presenting contexts suitable for consuming recommended items. The expected impacts of context style explanations are 1) persuasiveness: recognition of suitable context for usage motivates users to consume items, and 2) usefulness: envisioning context helps users to make right choices because the values of items depend on contexts. We evaluate context style persuasiveness and usefulness by a crowdsourcing-based user study in a restaurant recommendation setting. The context style explanation is compared to demographic and content style explanations. We also combine context style and other explanation styles, confirming that hybrid styles improve persuasiveness and usefulness of explanation.",
"title": ""
},
{
"docid": "9a1d6be6fbce508e887ee4e06a932cd2",
"text": "For ranked search in encrypted cloud data, order preserving encryption (OPE) is an efficient tool to encrypt relevance scores of the inverted index. When using deterministic OPE, the ciphertexts will reveal the distribution of relevance scores. Therefore, Wang et al. proposed a probabilistic OPE, called one-to-many OPE, for applications of searchable encryption, which can flatten the distribution of the plaintexts. In this paper, we proposed a differential attack on one-to-many OPE by exploiting the differences of the ordered ciphertexts. The experimental results show that the cloud server can get a good estimate of the distribution of relevance scores by a differential attack. Furthermore, when having some background information on the outsourced documents, the cloud server can accurately infer the encrypted keywords using the estimated distributions.",
"title": ""
},
{
"docid": "9d803b0ce1f1af621466b1d7f97b7edf",
"text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.",
"title": ""
},
{
"docid": "afebd24ead794065346bebf9df028523",
"text": "Currently Computer based systems are used by most of the universities around the world however, these systems are still paper based which involve physical paper existence. At present most of the universities are suffering obstacles in document management due to using paper based or semi automated systems. The objective of this paper is to present a paperless model for the university management system. A survey is conducted that enlisted some fundamental characteristics required to implement successful paperless environment. It was noted that simply converting paper-based activities to digital ones will not achieve a system without paper. Instead it is required to address complete model and its influencing factors at once. At the last we present a case study that reveals that tools and technologies are available for implementing paperless system but only there interweaving is required in a systematic manner.",
"title": ""
},
{
"docid": "2227e1fc84d1fee067c21b3cad5717aa",
"text": "This paper proposes an adaptive color-guided autoregressive (AR) model for high quality depth recovery from low quality measurements captured by depth cameras. We observe and verify that the AR model tightly fits depth maps of generic scenes. The depth recovery task is formulated into a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanied high quality color image. We analyze the stability of our method from a linear system point of view, and design a parameter adaptation scheme to achieve stable and accurate depth recovery. Quantitative and qualitative evaluation compared with ten state-of-the-art schemes show the effectiveness and superiority of our method. Being able to handle various types of depth degradations, the proposed method is versatile for mainstream depth sensors, time-of-flight camera, and Kinect, as demonstrated by experiments on real systems.",
"title": ""
}
] |
scidocsrr
|
a706dde23322087bbe787dd36be56edf
|
Model-Based Methods For Steganography And Steganalysis
|
[
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
}
] |
[
{
"docid": "f7a15a02c1f5d92d54d408b270ad86f1",
"text": "Davis (2001) developed a cognitive-behavioral model of pathological Internet use (PIU) in which the availability, and awareness of the Internet, psychopathologies such as depression, social anxiety or substance abuse, and situational cues providing reinforcement of Internet usage behaviors, interact to produce maladaptive cognitions(Charlton & Danforth, 2007). This model posits that user's cognition is responsible for PIU, and ineffective and/or behavioral symptoms in turn. To date, there has been no comprehensive study to test Davis' model. Lee, Choi et al. (2007) took the idea of cognitivebehavioral perspective (Davis, 2001) as an approach to developing tests of behavioral symptoms and negative outcomes of PIU respectively. Since their work was focused on developing the two tests, building or testing models of explicating PIU was not the main task. Thus, we attempted to reanalyze their data and to empirically explore and test a model of PIU.",
"title": ""
},
{
"docid": "e776c87ec35d67c6acbdf79d8a5cac0a",
"text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.",
"title": ""
},
{
"docid": "ae593e6c1ea6e01093d8226ef219320f",
"text": "Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera & trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's “true” 3D point trajectories. In this paper we draw upon a well known result centering around the Reduced Isometry Property (RIP) condition for sparse signal reconstruction. RIP allow us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an 21 inspired objective for trajectory reconstruction that is able to “adaptively” select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to current state of the art in trajectory basis NRSfM.",
"title": ""
},
{
"docid": "e808606994c3fd8eea1b78e8a3e55b8c",
"text": "We describe a Japanese-English patent parallel corpus created from the Japanese and US patent data provided for the NTCIR-6 patent retrieval task. The corpus contains about 2 million sentence pairs that were aligned automatically. This is the largest Japanese-English parallel corpus, which will be available to the public after the 7th NTCIR workshop meeting. We estimated that about 97% of the sentence pairs were correct alignments and about 90% of the alignments were adequate translations whose English sentences reflected almost perfectly the contents of the corresponding Japanese sentences.",
"title": ""
},
{
"docid": "c757cc329886c1192b82f36c3bed8b7f",
"text": "Though much research has been conducted on Subjectivity and Sentiment Analysis (SSA) during the last decade, little work has focused on Arabic. In this work, we focus on SSA for both Modern Standard Arabic (MSA) news articles and dialectal Arabic microblogs from Twitter. We showcase some of the challenges associated with SSA on microblogs. We adopted a random graph walk approach to extend the Arabic SSA lexicon using ArabicEnglish phrase tables, leading to improvements for SSA on Arabic microblogs. We used different features for both subjectivity and sentiment classification including stemming, part-of-speech tagging, as well as tweet specific features. Our classification features yield results that surpass Arabic SSA results in the literature.",
"title": ""
},
{
"docid": "33ef514ef6ea291ad65ed6c567dbff37",
"text": "In this paper, we present an improved feedforward sequential memory networks (FSMN) architecture, namely Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure. As a result, DFSMN significantly benefits from these skip connections and deep structure. We have compared the performance of DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. Experimental results shown that DFSMN can consistently outperform BLSTM with dramatic gain, especially trained with LFR using CD-Phone as modeling units. In the 20000 hours Fisher (FSH) task, the proposed DFSMN can achieve a word error rate of 9.4% by purely using the cross-entropy criterion and decoding with a 3-gram language model, which achieves a 1.5% absolute improvement compared to the BLSTM. In a 20000 hours Mandarin recognition task, the LFR trained DFSMN can achieve more than 20% relative improvement compared to the LFR trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in DFSMN to control the latency for real-time applications.",
"title": ""
},
{
"docid": "723aeab499abebfec38bfd8cf8484293",
"text": "Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences. However, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. Here we propose GraphRNN, a deep autoregressive model that addresses the above challenges and approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far. In order to quantitatively evaluate the performance of GraphRNN, we introduce a benchmark suite of datasets, baselines and novel evaluation metrics based on Maximum Mean Discrepancy, which measure distances between sets of graphs. Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50× larger than previous deep models.",
"title": ""
},
{
"docid": "d38fd67d2e5262aef7655e9d90a432ae",
"text": "In this paper we introduce a multilingual Named Entity Recognition (NER) system that uses statistical modeling techniques. The system identifies and classifies NEs in the Hungarian and English languages by applying AdaBoostM1 and the C4.5 decision tree learning algorithm. We focused on building as large a feature set as possible, and used a split and recombine technique to fully exploit its potentials. This methodology provided an opportunity to train several independent decision tree classifiers based on different subsets of features and combine their decisions in a majority voting scheme. The corpus made for the CoNLL 2003 conference and a segment of Szeged Corpus was used for training and validation purposes. Both of them consist entirely of newswire articles. Our system remains portable across languages without requiring any major modification and slightly outperforms the best system of CoNLL 2003, and achieved a 94.77% F measure for Hungarian. The real value of our approach lies in its different basis compared to other top performing models for English, which makes our system extremely successful when used in combination with CoNLL modells.",
"title": ""
},
{
"docid": "0d59ab6748a16bf4deedfc8bd79e4d71",
"text": "Paget's disease (PD) is a chronic progressive disease of the bone characterized by abnormal bone metabolism affecting either a single bone (monostotic) or many bones (polyostotic) with uncertain etiology. We report a case of PD in a 70-year-old male, which was initially identified as osteonecrosis of the maxilla. Non-drug induced osteonecrosis in PD is rare and very few cases have been reported in the literature.",
"title": ""
},
{
"docid": "6ba91269b707f64d2a45729161f44807",
"text": "The article is related to the development of techniques for automatic recognition of bird species by their sounds. It has been demonstrated earlier that a simple model of one time-varying sinusoid is very useful in classification and recognition of typical bird sounds. However, a large class of bird sounds are not pure sinusoids but have a clear harmonic spectrum structure. We introduce a way to classify bird syllables into four classes by their harmonic structure.",
"title": ""
},
{
"docid": "c0440776fdd2adab39e9a9ba9dd56741",
"text": "Corynebacterium glutamicum is an important industrial metabolite producer that is difficult to genetically engineer. Although the Streptococcus pyogenes (Sp) CRISPR-Cas9 system has been adapted for genome editing of multiple bacteria, it cannot be introduced into C. glutamicum. Here we report a Francisella novicida (Fn) CRISPR-Cpf1-based genome-editing method for C. glutamicum. CRISPR-Cpf1, combined with single-stranded DNA (ssDNA) recombineering, precisely introduces small changes into the bacterial genome at efficiencies of 86-100%. Large gene deletions and insertions are also obtained using an all-in-one plasmid consisting of FnCpf1, CRISPR RNA, and homologous arms. The two CRISPR-Cpf1-assisted systems enable N iterative rounds of genome editing in 3N+4 or 3N+2 days. A proof-of-concept, codon saturation mutagenesis at G149 of γ-glutamyl kinase relieves L-proline inhibition using Cpf1-assisted ssDNA recombineering. Thus, CRISPR-Cpf1-based genome editing provides a highly efficient tool for genetic engineering of Corynebacterium and other bacteria that cannot utilize the Sp CRISPR-Cas9 system.",
"title": ""
},
{
"docid": "28f727bdc27400c1c6daa7f108d8a464",
"text": "This article presents findings from the Children's Test of Nonword Repetition (CNRep). Normative data based on its administration to over 600 children aged between four and nine years are reported. Close developmental links are established between CNRep scores and vocabulary, reading, and comprehensive skills in children during the early school years. The links between nonword repetition and language skills are shown to be consistently higher and more specific than those obtained between language skills and another simple verbal task with a significant phonological memory component, auditory digit span. The psychological mechanisms underpinning these distinctive developmental relationships between nonword repetition and language development are considered.",
"title": ""
},
{
"docid": "bdc6ff2ed295039bb9d86944c49fff13",
"text": "The problem of maximizing influence spread has been widely studied in social networks, because of its tremendous number of applications in determining critical points in a social network for information dissemination. All the techniques proposed in the literature are inherently static in nature, which are designed for social networks with a fixed set of links. However, many forms of social interactions are transient in nature, with relatively short periods of interaction. Any influence spread may happen only during the period of interaction, and the probability of spread is a function of the corresponding interaction time. Furthermore, such interactions are quite fluid and evolving, as a result of which the topology of the underlying network may change rapidly, as new interactions form and others terminate. In such cases, it may be desirable to determine the influential nodes based on the dynamic interaction patterns. Alternatively, one may wish to discover the most likely starting points for a given infection pattern. We will propose methods which can be used both for optimization of information spread, as well as the backward tracing of the source of influence spread. We will present experimental results illustrating the effectiveness of our approach on a number of real data sets.",
"title": ""
},
{
"docid": "8db3f92e38d379ab5ba644ff7a59544d",
"text": "Within American psychology, there has been a recent surge of interest in self-compassion, a construct from Buddhist thought. Self-compassion entails: (a) being kind and understanding toward oneself in times of pain or failure, (b) perceiving one’s own suffering as part of a larger human experience, and (c) holding painful feelings and thoughts in mindful awareness. In this article we review findings from personality, social, and clinical psychology related to self-compassion. First, we define self-compassion and distinguish it from other self-constructs such as self-esteem, self-pity, and self-criticism. Next, we review empirical work on the correlates of self-compassion, demonstrating that self-compassion has consistently been found to be related to well-being. These findings support the call for interventions that can raise self-compassion. We then review the theory and empirical support behind current interventions that could enhance self-compassion including compassionate mind training (CMT), imagery work, the gestalt two-chair technique, mindfulness based stress reduction (MBSR), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT). Directions for future research are also discussed.",
"title": ""
},
{
"docid": "3f2d4df1b0ef315ee910636c9439b049",
"text": "Real-Time Line and Disk Light Shading\n Eric Heitz and Stephen Hill\n At SIGGRAPH 2016, we presented a new real-time area lighting technique for polygonal sources. In this talk, we will show how the underlying framework, based on Linearly Transformed Cosines (LTCs), can be extended to support line and disk lights. We will discuss the theory behind these approaches as well as practical implementation tips and tricks concerning numerical precision and performance.\n Physically Based Shading at DreamWorks Animation\n Feng Xie and Jon Lanz\n PDI/DreamWorks was one of the first animation studios to adopt global illumination in production rendering. Concurrently, we have also been developing and applying physically based shading principles to improve the consistency and realism of our material models, while balancing the need for intuitive artistic control required for feature animations.\n In this talk, we will start by presenting the evolution of physically based shading in our films. Then we will present some fundamental principles with respect to importance sampling and energy conservation in our BSDF framework with a pragmatic and efficient approach to transimssion fresnel modeling. Finally, we will present our new set of physically plausible production shaders for our new path tracer, which includes our new hard surface shader, our approach to material layering and some new developments in fabric and glitter shading.\n Volumetric Skin and Fabric Shading at Framestore\n Nathan Walster\n Recent advances in shading have led to the use of free-path sampling to better solve complex light transport within volumetric materials. In this talk, we describe how we have implemented these ideas and techniques within a production environment, their application on recent shows---such as Guardians of the Galaxy Vol. 2 and Alien: Covenant---and the effect this has had on artists' workflow within our studio.\n Practical Multilayered Materials in Call of Duty: Infinite Warfare\n Michał Drobot\n This talk presents a practical approach to multilayer, physically based surface rendering, specifically optimized for Forward+ rendering pipelines. The presented pipeline allows for the creation of complex surface by decomposing them into different mediums, each represented by a simple BRDF/BSSRDF and set of simple, physical macro properties, such as thickness, scattering and absorption. The described model is explained via practical examples of common multilayer materials such as car paint, lacquered wood, ice, and semi-translucent plastics. Finally, the talk describes intrinsic implementation details for achieving a low performance budget for 60 Hz titles as well as supporting multiple rendering modes: opaque, alpha blend, and refractive blend.\n Pixar's Foundation for Materials: PxrSurface and PxrMarschnerHair\n Christophe Hery and Junyi Ling\n Pixar's Foundation Materials, PxrSurface and PxrMarschnerHair, began shipping with RenderMan 21.\n PxrSurface is the standard surface shader developed in the studio for Finding Dory and used more recently for Cars 3 and Coco. This shader contains nine lobes that cover the entire gamut of surface materials for these two films: diffuse, three specular, iridescence, fuzz, subsurface, single scatter and a glass lobe. Each of these BxDF lobes is energy conserving, but conservation is not enforced between lobes on the surface level. We use parameter layering methods to feed a PxrSurface with pre-layered material descriptions. 
This simultaneously allows us the flexibility of a multilayered shading pipeline together with efficient and consistent rendering behavior.\n We also implemented our individual BxDFs with the latest state-of-the-art techniques. For example, our three specular lobes can be switched between Beckmann and GGX modes. Many compound materials have multiple layers of specular; these lobes interact with each other modulated by the Fresnel effect of the clearcoat layer. We also leverage LEADR mapping to recreate sub-displacement micro features such as metal flakes and clearcoat scratches.\n Another example is that PxrSurface ships with Jensen, d'Eon and Burley diffusion profiles. Additionally, we implemented a novel subsurface model using path-traced volumetric scattering, which represents a significant advancement. It captures zero and single scattering events of subsurface scattering implicit to the path-tracing algorithm. The user can adjust the phase-function of the scattering events and change the extinction profiles, and it also comes with standardized color inversion features for intuitive albedo input. To the best of our knowledge, this is the first commercially available rendering system to model these features and the rendering cost is comparable to classic diffusion subsurface scattering models.\n PxrMarschnerHair implements Marschner's seminal hair illumination model with importance sampling. We also account for the residual energy left after the R, TT, TRT and glint lobes, through a fifth diffuse lobe. We show that this hair surface shader can reproduce dark and blonde hair effectively in a path-traced production context. Volumetric scattering from fiber to fiber changes the perceived hue and saturation of a groom, so we also provide a color inversion scheme to invert input albedos, such that the artistic inputs are straightforward and intuitive.\n Revisiting Physically Based Shading at Imageworks\n Christopher Kulla and Alejandro Conty\n Two years ago, the rendering and shading groups at Sony Imageworks embarked on a project to review the structure of our physically based shaders in an effort to simplify their implementation, improve quality and pave the way to take advantage of future improvements in light transport algorithms.\n We started from classic microfacet BRDF building blocks and investigated energy conservation and artist friendly parametrizations. We continued by unifying volume rendering and subsurface scattering algorithms and put in place a system for medium tracking to improve the setup of nested media. Finally, from all these building blocks, we rebuilt our artist-facing shaders with a simplified interface and a more flexible layering approach through parameter blending.\n Our talk will discuss the details of our various building blocks, what worked and what didn't, as well as some future research directions we are still interested in exploring.",
"title": ""
},
{
"docid": "f73cd33c8dfc9791558b239aede6235b",
"text": "Web clustering engines organize search results by topic, thus offering a complementary view to the flat-ranked list returned by conventional search engines. In this survey, we discuss the issues that must be addressed in the development of a Web clustering engine, including acquisition and preprocessing of search results, their clustering and visualization. Search results clustering, the core of the system, has specific requirements that cannot be addressed by classical clustering algorithms. We emphasize the role played by the quality of the cluster labels as opposed to optimizing only the clustering structure. We highlight the main characteristics of a number of existing Web clustering engines and also discuss how to evaluate their retrieval performance. Some directions for future research are finally presented.",
"title": ""
},
{
"docid": "f9effb8f9a0a2966c5f4bcf8b420177e",
"text": "This paper identifies a new opportunity for improving the efficiency of a processor core: memory access phases of programs. These are dynamic regions of programs where most of the instructions are devoted to memory access or address computation. These occur naturally in programs because of workload properties, or when employing an in-core accelerator, we get induced phases where the code execution on the core is access code. We observe such code requires an OOO core's dataflow and dynamism to run fast and does not execute well on an in-order processor. However, an OOO core consumes much power, effectively increasing energy consumption and reducing the energy efficiency of in-core accelerators.\n We develop an execution model called memory access dataflow (MAD) that encodes dataflow computation, event-condition-action rules, and explicit actions. Using it we build a specialized engine that provides an OOO core's performance but at a fraction of the power. Such an engine can serve as a general way for any accelerator to execute its respective induced phase, thus providing a common interface and implementation for current and future accelerators. We have designed and implemented MAD in RTL, and we demonstrate its generality and flexibility by integration with four diverse accelerators (SSE, DySER, NPU, and C-Cores). Our quantitative results show, relative to in-order, 2-wide OOO, and 4-wide OOO, MAD provides 2.4×, 1.4× and equivalent performance respectively. It provides 0.8×, 0.6× and 0.4× lower energy.",
"title": ""
},
{
"docid": "1f4c22a725fb5cb34bb1a087ba47987e",
"text": "This paper demonstrates key capabilities of Cognitive Database, a novel AI-enabled relational database system which uses an unsupervised neural network model to facilitate semantic queries over relational data. The neural network model, called word embedding, operates on an unstructured view of the database and builds a vector model that captures latent semantic context of database entities of different types. The vector model is then seamlessly integrated into the SQL infrastructure and exposed to the users via a new class of SQL-based analytics queries known as cognitive intelligence (CI) queries. The cognitive capabilities enable complex queries over multi-modal data such as semantic matching, inductive reasoning queries such as analogies, and predictive queries using entities not present in a database. We plan to demonstrate the end-to-end execution flow of the cognitive database using a Spark based prototype. Furthermore, we demonstrate the use of CI queries using a publicaly available enterprise financial dataset (with text and numeric values). A Jupyter Notebook python based implementation will also be presented.",
"title": ""
},
{
"docid": "4c290421dc42c3a5a56c7a4b373063e5",
"text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.",
"title": ""
},
{
"docid": "5cec6746f24246f6e99b1dae06f9a21a",
"text": "Recently there has been arising interest in automatically recognizing nonverbal behaviors that are linked with psychological conditions. Work in this direction has shown great potential for cases such as depression and post-traumatic stress disorder (PTSD), however most of the times gender differences have not been explored. In this paper, we show that gender plays an important role in the automatic assessment of psychological conditions such as depression and PTSD. We identify a directly interpretable and intuitive set of predictive indicators, selected from three general categories of nonverbal behaviors: affect, expression variability and motor variability. For the analysis, we employ a semi-structured virtual human interview dataset which includes 53 video recorded interactions. Our experiments on automatic classification of psychological conditions show that a gender-dependent approach significantly improves the performance over a gender agnostic one.",
"title": ""
}
] |
scidocsrr
|
80d6b94f7905539fed16c883a0fd3d42
|
Deep Complex Networks
|
[
{
"docid": "51048699044d547df7ffd3a0755c76d9",
"text": "Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with “deep\" transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Geršgorin’s circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis we propose Recurrent Highway Networks, which are deep not only in time but also in space, extending the LSTM architecture to larger step-to-step transition depths. Experiments demonstrate that the proposed architecture results in powerful and efficient models benefiting from up to 10 layers in the recurrent transition. On the Penn Treebank language modeling corpus, a single network outperforms all previous ensemble results with a perplexity of 66.0 on the test set. On the larger Hutter Prize Wikipedia dataset, a single network again significantly outperforms all previous results with an entropy of 1.32 bits per character on the test set.",
"title": ""
},
{
"docid": "b16992ec2416b420b2115037c78cfd4b",
"text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.",
"title": ""
}
] |
[
{
"docid": "f5e3014f479556cde21321cf1ce8f9e3",
"text": "Physiological signals are widely used to perform medical assessment for monitoring an extensive range of pathologies, usually related to cardio-vascular diseases. Among these, both PhotoPlethysmoGraphy (PPG) and Electrocardiography (ECG) signals are those more employed. PPG signals are an emerging non-invasive measurement technique used to study blood volume pulsations through the detection and analysis of the back-scattered optical radiation coming from the skin. ECG is the process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin. In the present paper we propose a physiological ECG/PPG \"combo\" pipeline using an innovative bio-inspired nonlinear system based on a reaction-diffusion mathematical model, implemented by means of the Cellular Neural Network (CNN) methodology, to filter PPG signal by assigning a recognition score to the waveforms in the time series. The resulting \"clean\" PPG signal exempts from distortion and artifacts is used to validate for diagnostic purpose an EGC signal simultaneously detected for a same patient. The multisite combo PPG-ECG system proposed in this work overpasses the limitations of the state of the art in this field providing a reliable system for assessing the above-mentioned physiological parameters and their monitoring over time for robust medical assessment. The proposed system has been validated and the results confirmed the robustness of the proposed approach.",
"title": ""
},
{
"docid": "b73f4816e11353d1f7cbf8862dd90de3",
"text": "We propose using relaxed deep supervision (RDS) within convolutional neural networks for edge detection. The conventional deep supervision utilizes the general groundtruth to guide intermediate predictions. Instead, we build hierarchical supervisory signals with additional relaxed labels to consider the diversities in deep neural networks. We begin by capturing the relaxed labels from simple detectors (e.g. Canny). Then we merge them with the general groundtruth to generate the RDS. Finally we employ the RDS to supervise the edge network following a coarse-to-fine paradigm. These relaxed labels can be seen as some false positives that are difficult to be classified. Weconsider these false positives in the supervision, and are able to achieve high performance for better edge detection. Wecompensate for the lack of training images by capturing coarse edge annotations from a large dataset of image segmentations to pretrain the model. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on the well-known BSDS500 dataset (ODS F-score of .792) and obtains superior cross-dataset generalization results on NYUD dataset.",
"title": ""
},
{
"docid": "45c04c80a5e4c852c4e84ba66bd420dd",
"text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.",
"title": ""
},
{
"docid": "7de901a988afab3aee99f44d3f98cb46",
"text": "A pulsewidth modulation (PWM) and pulse frequency modulation (PFM) hybrid modulated three-port converter (TPC) interfacing a photovoltaic (PV) source, a storage battery, and a load is proposed for a standalone PV/battery power system. The TPC is derived by integrating a two-phase interleaved boost circuit and a full-bridge LLC resonant circuit. Hence, it features a reduced number of switches, lower cost, and single-stage power conversion between any two of the three ports. With the PWM and PFM hybrid modulation strategy, the dc voltage gain from the PV to the load is wide, the input current ripple is small, and flexible power management among three ports can be easily achieved. Moreover, all primary switches turn ON with zero-voltage switching (ZVS), while all secondary diodes operate with zero-current switching over full operating range, which is beneficial for reducing switching losses, switch voltage stress, and electromagnetic interference. The topology derivation and power transfer analysis are presented. Depending on the resonant states, two different operation modes are identified and explored. Then, main characteristics, including the gain, input current ripple, and ZVS, are analyzed and compared. Furthermore, guidelines for parameter design and optimization are given as well. Finally, a 500-W laboratory prototype is built and tested to verify the effectiveness and advantages of all proposals.",
"title": ""
},
{
"docid": "81e3ff54c7cd97d90108f3a0c838273d",
"text": "Time-of-flight (TOF) cameras are sensors that can measure the depths of scene points, by illuminating the scene with a controlled laser or LED source and then analyzing the reflected light. In this paper, we will first describe the underlying measurement principles of time-of-flight cameras, including: (1) pulsed-light cameras, which measure directly the time taken for a light pulse to travel from the device to the object and back again, and (2) continuous-wave-modulated light cameras, which measure the phase difference between the emitted and received signals, and hence obtain the travel time indirectly. We review the main existing designs, including prototypes as well as commercially available devices. We also review the relevant camera calibration principles, and how they are applied to TOF devices. Finally, we discuss the benefits and challenges of combined TOF and color camera systems.",
"title": ""
},
{
"docid": "76def4ca02a25669610811881531e875",
"text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. We report the measured residual phase noise and frequency stability of the syn thesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two sig nals was σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cry oCSO may be estimated, respectively, as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ s 5.2 × 10<sup>-2</sup> × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ <; 10<sup>4</sup>s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cry oCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.",
"title": ""
},
{
"docid": "e6107ac6d0450bb1ce4dab713e6dcffa",
"text": "Enterprises collect a large amount of personal data about their customers. Even though enterprises promise privacy to their customers using privacy statements or P3P, there is no methodology to enforce these promises throughout and across multiple enterprises. This article describes the Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data. Its comprehensive privacy-specific access control language expresses restrictions on the access to personal data, possibly shared between multiple enterprises. E-P3P separates the enterprise-specific deployment policy from the privacy policy that covers the complete life cycle of collected data. E-P3P introduces a viable separation of duty between the three “administrators” of a privacy system: The privacy officer designs and deploys privacy policies, the security officer designs access control policies, and the customers can give consent while selecting opt-in and opt-out choices. To appear in2nd Workshop on Privacy Enhancing Technologies , Lecture Notes in Computer Science. Springer Verlag, 2002. Copyright c © Springer",
"title": ""
},
{
"docid": "053218d2f92ec623daa403a55aba8c74",
"text": "Yoga is an age-old traditional Indian psycho-philosophical-cultural method of leading one's life, that alleviates stress, induces relaxation and provides multiple health benefits to the person following its system. It is a method of controlling the mind through the union of an individual's dormant energy with the universal energy. Commonly practiced yoga methods are 'Pranayama' (controlled deep breathing), 'Asanas' (physical postures) and 'Dhyana' (meditation) admixed in varying proportions with differing philosophic ideas. A review of yoga in relation to epilepsy encompasses not only seizure control but also many factors dealing with overall quality-of-life issues (QOL). This paper reviews articles related to yoga and epilepsy, seizures, EEG, autonomic changes, neuro-psychology, limbic system, arousal, sleep, brain plasticity, motor performance, brain imaging studies, and rehabilitation. There is a dearth of randomized, blinded, controlled studies related to yoga and seizure control. A multi-centre, cross-cultural, preferably blinded (difficult for yoga), well-randomized controlled trial, especially using a single yogic technique in a homogeneous population such as Juvenile myoclonic epilepsy is justified to find out how yoga affects seizure control and QOL of the person with epilepsy.",
"title": ""
},
{
"docid": "0506a7f5dddf874487c90025dff0bc7d",
"text": "This paper presents a low-power decision-feedback equalizer (DFE) receiver front-end and a two-step minimum bit-error-rate (BER) adaptation algorithm. A high energy efficiency of 0.46 mW/Gbps is made possible by the combination of a direct-feedback finite-impulse-response (FIR) DFE, an infinite-impulse-response (IIR) DFE, and a clock-and-data recovery (CDR) circuit with adjustable timing offsets. Based on this architecture, the power-hungry stages used in prior DFE receivers such as the continuous-time linear equalizer (CTLE), the current-mode summing circuit for a multitap DFE, and the fast selection logic for a loop-unrolling DFE can all be removed. A two-step adaptation algorithm that finds the equalizer coefficients minimizing the BER is described. First, an extra data sampler with adjustable voltage and timing offsets measures the single-bit response (SBR) of the channel and coarsely tunes the initial coefficient values in the foreground. Next, the same circuit measures the eye-opening and bit-error rates and fine tunes the coefficients in background using a stochastic hill-climbing algorithm. A prototype DFE receiver fabricated in a 65-nm LP/RF CMOS dissipates 2.3 mW and demonstrates measured eye-opening values of 174 mV pp and 0.66 UIpp while operating at 5 Gb/s with a -15-dB loss channel.",
"title": ""
},
{
"docid": "806a83d17d242a7fd5272862158db344",
"text": "Solar power has become an attractive alternative of electricity energy. Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.",
"title": ""
},
{
"docid": "8bbff097ecdf6ede66bf13c985501fd4",
"text": "In this paper, we present a practical algorithm for calibrating a magnetometer for the presence of magnetic disturbances and for magnetometer sensor errors. To allow for combining the magnetometer measurements with inertial measurements for orientation estimation, the algorithm also corrects for misalignment between the magnetometer and the inertial sensor axes. The calibration algorithm is formulated as the solution to a maximum likelihood problem, and the computations are performed offline. The algorithm is shown to give good results using data from two different commercially available sensor units. Using the calibrated magnetometer measurements in combination with the inertial sensors to determine the sensor’s orientation is shown to lead to significantly improved heading estimates.",
"title": ""
},
{
"docid": "6b698146f5fbd2335e3d7bdfd39e8e4f",
"text": "Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.",
"title": ""
},
{
"docid": "eacf295c0cbd52599a1567c6d4193007",
"text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.",
"title": ""
},
{
"docid": "4fee0cba7a71b074db0bcf922cc111ae",
"text": "The ascendance of emotion theory, recent advances in cognitive science and neuroscience, and increasingly important findings from developmental psychology and learning make possible an integrative account of the nature and etiology of anxiety and its disorders. This model specifies an integrated set of triple vulnerabilities: a generalized biological (heritable) vulnerability, a generalized psychological vulnerability based on early experiences in developing a sense of control over salient events, and a more specific psychological vulnerability in which one learns to focus anxiety on specific objects or situations. The author recounts the development of anxiety and related disorders based on these triple vulnerabilities and discusses implications for the classification of emotional disorders.",
"title": ""
},
{
"docid": "5a912359338b6a6c011e0d0a498b3e8d",
"text": "Learning Granger causality for general point processes is a very challenging task. In this paper, we propose an effective method, learning Granger causality, for a special but significant type of point processes — Hawkes process. According to the relationship between Hawkes process’s impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions’ coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparsegroup-lasso (SGL) regularizer. Additionally, the flexibility of our model allows to incorporate the clustering structure event types into learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.",
"title": ""
},
{
"docid": "c6c57133b22e61136b6812b341bb7ccc",
"text": "A novel broadband high gain vertical planar printed antenna is presented. The proposed antenna is composed of a bowtie-shaped electric dipole, a loop antenna that works as a magnetic dipole, a microstrip-to-coplanar stripline transition balun, and the H-shaped resonator (HSR) structures that are used for gain enhancement, all of which are printed in the same plane perpendicular to the ground. Prototypes of the HSR-loaded antenna and the unloaded one were fabricated and measured. Analyses and comparisons of the proposed antenna are made. The HSR-loaded antenna has a wide impedance bandwidth of 51.9%, ranging from 2.31 GHz to 3.93 GHz and a stable gain of about 7.4-9.7 dBi over the whole operating band. Compared with the unloaded structure, a gain increment of about 0.5-6.0 dB over the whole band is obtained. The good electrical performances make the antenna suitable for the wireless communication systems and phased arrays.",
"title": ""
},
{
"docid": "27d9675f4296f455ade2c58b7f7567e8",
"text": "In recent years, sharing economy has been growing rapidly. Meanwhile, understanding why people participate in sharing economy emerges as a rising concern. Given that research on sharing economy is scarce in the information systems literature, this paper aims to enrich the theoretical development in this area by testing different dimensions of convenience and risk that may influence people’s participation intention in sharing economy. We will also examine the moderate effects of two regulatory foci (i.e., promotion focus and prevention focus) on participation intention. The model will be tested with data of Uber users. Results of the study will help researchers and practitioners better understand people’s behavior in sharing economy.",
"title": ""
},
{
"docid": "be1b9731df45408571e75d1add5dfe9c",
"text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.",
"title": ""
},
{
"docid": "5db646ed693f0889ac05fa614c4d6084",
"text": "Traditional fact checking by experts and analysts cannot keep pace with the volume of newly created information. It is important and necessary, therefore, to enhance our ability to computationally determine whether some statement of fact is true or false. We view this problem as a link-prediction task in a knowledge graph, and show that a new model of the top discriminative predicate paths is able to understand the meaning of some statement and accurately determine its veracity. We evaluate our approach by examining thousands of claims related to history, geography, biology, and politics using a public, million node knowledge graph extracted from Wikipedia and PubMedDB. Not only does our approach significantly outperform related models, we also find that the discriminative predicate path model is easily interpretable and provides sensible reasons for the final determination.",
"title": ""
},
{
"docid": "ee9ca88d092538a399d192cf1b9e9df6",
"text": "The new user problem in recommender systems is still challenging, and there is not yet a unique solution that can be applied in any domain or situation. In this paper we analyze viable solutions to the new user problem in collaborative filtering (CF) that are based on the exploitation of user personality information: (a) personality-based CF, which directly improves the recommendation prediction model by incorporating user personality information, (b) personality-based active learning, which utilizes personality information for identifying additional useful preference data in the target recommendation domain to be elicited from the user, and (c) personality-based cross-domain recommendation, which exploits personality information to better use user preference data from auxiliary domains which can be used to compensate the lack of user preference data in the target domain. We benchmark the effectiveness of these methods on large datasets that span several domains, namely movies, music and books. Our results show that personality-aware methods achieve performance improvements that range from 6 to 94 % for users completely new to the system, while increasing the novelty of the recommended items by 3–40 % with respect to the non-personalized popularity baseline. We also discuss the limitations of our approach and the situations in which the proposed methods can be better applied, hence providing guidelines for researchers and practitioners in the field.",
"title": ""
}
] |
scidocsrr
|
284f5c4d15b7b912871824ff8373c852
|
PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding
|
[
{
"docid": "7e0f2bc2db0947489fa7e348f8c21f2c",
"text": "In coming to understand the world-in learning concepts, acquiring language, and grasping causal relations-our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?",
"title": ""
}
] |
[
{
"docid": "702470e5d2f64a2c987a082e22f544db",
"text": "Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep neural network architecture, namely generalization tower network (GTN), which can achieve MT-RL within a single learned model. Specifically, the architecture of GTN is composed of both horizontal and vertical streams. In our GTN architecture, horizontal streams are used to learn representation shared in similar tasks. In contrast, the vertical streams are introduced to be more suitable for handling diverse tasks, which encodes hierarchical shared knowledge of these tasks. The effectiveness of the introduced vertical stream is validated by experimental results. Experimental results further verify that our GTN architecture is able to advance the state-of-the-art MT-RL, via being tested on 51 Atari games.",
"title": ""
},
{
"docid": "d59d49083f896c01e8b8649f3a35b4c1",
"text": "This paper presents a wideband FMCW MIMO radar sensor capable of working in the frequency range between 120 GHz and 140 GHz. The sensor is based on a radar chipset fabricated in SiGe technology and uses a MIMO approach to improve the angular resolution. The MIMO operation is implemented by time domain multiplexing of the transmitters. The radar is capable of producing 2D images by using FFT processing and a delay-and-sum beamformer. This paper presents the overall radar system design together with the image reconstruction algorithms as well as first imaging results.",
"title": ""
},
{
"docid": "e82e4599a7734c9b0292a32f551dd411",
"text": "Generating a text abstract from a set of documents remains a challenging task. The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web. In contrast, parallel data for multi-document summarization are scarce and costly to obtain. There is a pressing need to adapt an encoder-decoder model trained on single-document summarization data to work with multiple-document input. In this paper, we present an initial investigation into a novel adaptation method. It exploits the maximal marginal relevance method to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse disparate sentences to an abstractive summary. The adaptation method is robust and itself requires no training data. Our system compares favorably to state-of-the-art extractive and abstractive approaches judged by automatic metrics and human assessors.",
"title": ""
},
{
"docid": "9bec22bcbf1ab3071d65dd8b41d3cf51",
"text": "Omni-directional mobile platforms have the ability to move instantaneously in any direction from any configuration. As such, it is important to have a mathematical model of the platform, especially if the platform is to be used as an autonomous vehicle. Autonomous behaviour requires that the mobile robot choose the optimum vehicle motion in different situations for object/collision avoidance and task achievement. This paper develops and verifies a mathematical model of a mobile robot platform that implements mecanum wheels to achieve omni-directionality. The mathematical model will be used to achieve optimum autonomous control of the developed mobile robot as an office service robot. Omni-directional mobile platforms have improved performance in congested environments and narrow aisles, such as those found in factory workshops, offices, warehouses, hospitals, etc.",
"title": ""
},
{
"docid": "cf6d0e1b0fd5a258fdcdb5a9fe8d2b65",
"text": "UNLABELLED\nPrevious studies have shown that resistance training with restricted venous blood flow (Kaatsu) results in significant strength gains and muscle hypertrophy. However, few studies have examined the concurrent vascular responses following restrictive venous blood flow training protocols.\n\n\nPURPOSE\nThe purpose of this study was to examine the effects of 4 wk of handgrip exercise training, with and without venous restriction, on handgrip strength and brachial artery flow-mediated dilation (BAFMD).\n\n\nMETHODS\nTwelve participants (mean +/- SD: age = 22 +/- 1 yr, men = 5, women = 7) completed 4 wk of bilateral handgrip exercise training (duration = 20 min, intensity = 60% of the maximum voluntary contraction, cadence = 15 grips per minute, frequency = three sessions per week). During each session, venous blood flow was restricted in one arm (experimental (EXP) arm) using a pneumatic cuff placed 4 cm proximal to the antecubital fossa and inflated to 80 mm Hg for the duration of each exercise session. The EXP and the control (CON) arms were randomly selected. Handgrip strength was measured using a hydraulic hand dynamometer. Brachial diameters and blood velocity profiles were assessed, using Doppler ultrasonography, before and after 5 min of forearm occlusion (200 mm Hg) before and at the end of the 4-wk exercise.\n\n\nRESULTS\nAfter exercise training, handgrip strength increased 8.32% (P = 0.05) in the CON arm and 16.17% (P = 0.05) in the EXP arm. BAFMD increased 24.19% (P = 0.0001) in the CON arm and decreased 30.36% (P = 0.0001) in the EXP arm.\n\n\nCONCLUSIONS\nThe data indicate handgrip training combined with venous restriction results in superior strength gains but reduced BAFMD compared with the nonrestricted arm.",
"title": ""
},
{
"docid": "6cc8164c14c6a95617590e66817c0db7",
"text": "nor fazila k & ku Halim kH. 2012. Effects of soaking on yield and quality of agarwood oil. The aims of this study were to investigate vaporisation temperature of agarwood oil, determine enlargement of wood pore size, analyse chemical components in soaking solvents and examine the chemical composition of agarwood oil extracted from soaked and unsoaked agarwood. Agarwood chips were soaked in two different acids, namely, sulphuric and lactic acids for 168 hours at room temperature (25 °C). Effects of soaking were determined using thermogravimetric analysis (TGA), scanning electron microscope (SEM) and gas chromatography-mass spectrum analysis. With regard to TGA curve, a small portion of weight loss was observed between 110 and 200 °C for agarwood soaked in lactic acid. SEM micrograph showed that the lactic acid-soaked agarwood demonstrated larger pore size. High quality agarwood oil was obtained from soaked agarwood. In conclusion, agarwood soaked in lactic acid with concentration of 0.1 M had the potential to reduce the vaporisation temperature of agarwood oil and enlarge the pore size of wood, hence, improving the yield and quality of agarwood oil.",
"title": ""
},
{
"docid": "b8a7eb324085eef83f88185b9544d5b5",
"text": "The research in the area of game accessibility has grown significantly since the last time it was examined in 2005. This paper examines the body of work between 2005 and 2010. We selected a set of papers on topics we felt represented the scope of the field, but were not able to include all papers on the subject. A summary of the research we examined is provided, along with suggestions for future work in game accessibility. It is hoped that this summary will prompt others to perform further research in this area.",
"title": ""
},
{
"docid": "ed9c0cdb74950bf0f1288931707b9d08",
"text": "Introduction This chapter reviews the theoretical and empirical literature on the concept of credibility and its areas of application relevant to information science and technology, encompassing several disciplinary approaches. An information seeker's environment—the Internet, television, newspapers, schools, libraries, bookstores, and social networks—abounds with information resources that need to be evaluated for both their usefulness and their likely level of accuracy. As people gain access to a wider variety of information resources, they face greater uncertainty regarding who and what can be believed and, indeed, who or what is responsible for the information they encounter. Moreover, they have to develop new skills and strategies for determining how to assess the credibility of an information source. Historically, the credibility of information has been maintained largely by professional knowledge workers such as editors, reviewers, publishers, news reporters, and librarians. Today, quality control mechanisms are evolving in such a way that a vast amount of information accessed through a wide variety of systems and resources is out of date, incomplete, poorly organized, or simply inaccurate (Janes & Rosenfeld, 1996). Credibility has been examined across a number of fields ranging from communication, information science, psychology, marketing, and the management sciences to interdisciplinary efforts in human-computer interaction (HCI). Each field has examined the construct and its practical significance using fundamentally different approaches, goals, and presuppositions, all of which results in conflicting views of credibility and its effects. The notion of credibility has been discussed at least since Aristotle's examination of ethos and his observations of speakers' relative abilities to persuade listeners. Disciplinary approaches to investigating credibility systematically developed only in the last century, beginning within the field of communication. A landmark among these efforts was the work of Hovland and colleagues (Hovland, Jannis, & Kelley, 1953; Hovland & Weiss, 1951), who focused on the influence of various characteristics of a source on a recipient's message acceptance. This work was followed by decades of interest in the relative credibility of media involving comparisons between newspapers, radio, television, Communication researchers have tended to focus on sources and media, viewing credibility as a perceived characteristic. Within information science, the focus is on the evaluation of information, most typically instantiated in documents and statements. Here, credibility has been viewed largely as a criterion for relevance judgment, with researchers focusing on how information seekers assess a document's likely level of This brief account highlights an often implicit focus on varying objects …",
"title": ""
},
{
"docid": "81b9bc89940dfd93d4c10ad011ba6d68",
"text": "The emerging vehicular applications demand for a lot more computing and communication capacity to excel in their compute-intensive and latency-sensitive tasks. Fog computing, which focuses on moving computing resources to the edge of networks, complements cloud computing by solving the latency constraints and reducing ingress traffic to the cloud. This paper presents a visionary concept on vehicular fog computing that turns connected vehicles into mobile fog nodes and utilises mobility of vehicles for providing cost-effective and on-demand fog computing for vehicular applications. Besides system design, this paper also discusses the remained technical challenges.",
"title": ""
},
{
"docid": "1a39b10cfdcae83004a1f3248df18ab2",
"text": "This chapter discusses the task of topic segmentation: automatically dividing single long recordings or transcripts into shorter, topically coherent segments. First, we look at the task itself, the applications which require it, and some ways to evaluate accuracy. We then explain the most influential approaches – generative and discriminative, supervised and unsupervised – and discuss their application in particular domains.",
"title": ""
},
{
"docid": "6dc474889704ce2cecdaa5bb2a15137b",
"text": "ions. The experts often redistributed the extracted domain concepts according to their domain view. For example, two subclasses identified for Pr tein belong to different domains, molecular biology and bioinformatics, and have to be placed in the corresponding hierarchies accordingly. Such abstractions need to be still manually created according to the ontology engineers view on the domain. However, the abstraction step is considerably supported if the expert has an overview of relevant domain concepts. Support. The curators considered the extracted ontologies as a useful start for deriving a domain ontology. Several complex structures could be directly included in a final ontology (e.g., theSitehierarchy in Figure 7.4), or provided helpful hints on how certain concepts interrelate. The most appreciated contribution was that the learned ontologies even suggested new additions for the manually built ontologies.",
"title": ""
},
{
"docid": "1f7f0b82bf5822ee51313edfd1cb1593",
"text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.",
"title": ""
},
{
"docid": "beb59e93d6e9e4d27cba95b428faec19",
"text": "Landslides cause lots of damage to life and property world over. There has been research in machine-learning that aims to predict landslides based on the statistical analysis of historical landslide events and its triggering factors. However, prediction of landslides suffers from a class-imbalance problem as landslides and land-movement are very rare events. In this paper, we apply state-of-the-art techniques to correct the class imbalance in landslide datasets. More specifically, to overcome the class-imbalance problem, we use different synthetic and oversampling techniques to a real-world landslide data collected from the Chandigarh - Manali highway. Also, we apply several machine-learning algorithms to the landslide data set for predicting landslides and evaluating our algorithms. Different algorithms have been assessed using techniques like the area under the ROC curve (AUC) and sensitivity index (d'). Results suggested that random forest algorithm performed better compared to other classification techniques like neural networks, logistic regression, support vector machines, and decision trees. Furthermore, among class-imbalance methods, the Synthetic Minority Oversampling Technique with iterative partitioning filter (SMOTE-IPF) performed better than other techniques. We highlight the implications of our results and methods for predicting landslides in the real world.",
"title": ""
},
{
"docid": "63fa6565372b88315ccac15d6d8f0695",
"text": "This paper proposes a novel method for the prediction of stock market closing price. Many researchers have contributed in this area of chaotic forecast in their ways. Data mining techniques can be used more in financial markets to make qualitative decisions for investors. Fundamental and technical analyses are the traditional approaches so far. ANN is a popular way to identify unknown and hidden patterns in data is used for share market prediction. A multilayered feed-forward neural network is built by using combination of data and textual mining. The Neural Network is trained on the stock quotes and extracted key phrases using the Backpropagation Algorithm which is used to predict share market closing price. This paper is an attempt to determine whether the BSE market news in combination with the historical quotes can efficiently help in the calculation of the BSE closing index for a given trading day.",
"title": ""
},
{
"docid": "748abc573febb27f9b9eae92ec68fff7",
"text": "In this paper we develop a computational model of adaptation and spatial vision for realistic tone reproduction. The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system. We incorporate the model into a tone reproduction operator that maps the vast ranges of radiances found in real and synthetic scenes into the small fixed ranges available on conventional display devices such as CRT’s and printers. The model allows the operator to address the two major problems in realistic tone reproduction: wide absolute range and high dynamic range scenes can be displayed; and the displayed images match our perceptions of the scenes at both threshold and suprathreshold levels to the degree possible given a particular display device. Although in this paper we apply our visual model to the tone reproduction problem, the model is general and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms. CR Categories: I.3.0 [Computer Graphics]: General;",
"title": ""
},
{
"docid": "b101ab8f2242e85ccd7948b0b3ffe9b4",
"text": "This paper describes a language-independent model for multi-class sentiment analysis using a simple neural network architecture of five layers (Embedding, Conv1D, GlobalMaxPooling and two Fully-Connected). The advantage of the proposed model is that it does not rely on language-specific features such as ontologies, dictionaries, or morphological or syntactic pre-processing. Equally important, our system does not use pre-trained word2vec embeddings which can be costly to obtain and train for some languages. In this research, we also demonstrate that oversampling can be an effective approach for correcting class imbalance in the data. We evaluate our methods on three publicly available datasets for English, German and Arabic, and the results show that our system’s performance is comparable to, or even better than, the state of the art for these datasets. We make our source-code publicly available.",
"title": ""
},
{
"docid": "7c254a96816b8ad1aa68a9a4927b3764",
"text": "The purpose of this study is to explore cost and management accounting practices utilized by manufacturing companies operating in Istanbul, Turkey. The sample of the study consists of 61 companies, containing both small and medium-sized enterprises, and large companies. The data collection methodology of the study is questionnaire survey. The content of the questionnaire survey is based on several previous studies. The major findings of the study are as follows: the most widely used product costing method is job costing; the complexity in production poses as the highest ranking difficulty in product costing; the most widely used three overhead allocation bases are prime costs, units produced, and direct labor cost; pricing decisions is the most important area where costing information is used; overall mean of the ratio of overhead to total cost is 34.48 percent for all industries; and the most important three management accounting practices are budgeting, planning and control, and cost-volume-profit analysis. Furthermore, decreasing profitability, increasing costs and competition, and economic crises are the factors, which increase the perceived importance of cost accounting. The findings indicate that companies perceive traditional management accounting tools still important. However, new management accounting practices such as strategic planning, and transfer pricing are perceived less important than traditional ones. Therefore, companies need to improve themselves in this aspect.",
"title": ""
},
{
"docid": "f734f6059c849c88e5b53d3584bf0a97",
"text": "In three studies (two representative nationwide surveys, N = 1,007, N = 682; and one experimental, N = 76) we explored the effects of exposure to hate speech on outgroup prejudice. Following the General Aggression Model, we suggest that frequent and repetitive exposure to hate speech leads to desensitization to this form of verbal violence and subsequently to lower evaluations of the victims and greater distancing, thus increasing outgroup prejudice. In the first survey study, we found that lower sensitivity to hate speech was a positive mediator of the relationship between frequent exposure to hate speech and outgroup prejudice. In the second study, we obtained a crucial confirmation of these effects. After desensitization training individuals were less sensitive to hate speech and more prejudiced toward hate speech victims than their counterparts in the control condition. In the final study, we replicated several previous effects and additionally found that the effects of exposure to hate speech on prejudice were mediated by a lower sensitivity to hate speech, and not by lower sensitivity to social norms. Altogether, our studies are the first to elucidate the effects of exposure to hate speech on outgroup prejudice.",
"title": ""
},
{
"docid": "c6478751e51c295811c9994f734f1336",
"text": "Optical character recognition (OCR) has made great progress in recent years due to the introduction of recognition engines based on recurrent neural networks, in particular the LSTM architecture. This paper describes a new, open-source line recognizer combining deep convolutional networks and LSTMs, implemented in PyTorch and using CUDA kernels for speed. Experimental results are given comparing the performance of different combinations of geometric normalization, 1D LSTM, deep convolutional networks, and 2D LSTM networks. An important result is that while deep hybrid networks without geometric text line normalization outperform 1D LSTM networks with geometric normalization, deep hybrid networks with geometric text line normalization still outperform all other networks. The best networks achieve a throughput of more than 100 lines per second and test set error rates on UW3 of 0.25%.",
"title": ""
}
] |
scidocsrr
|
c6add222ec1379d091debe55d250ce7e
|
Flattening curved documents in images
|
[
{
"docid": "58f2677cb0be82c83a34c632f14e12dc",
"text": "Image warping is a common problem when one scans or photocopies a document page from a thick bound volume, resulting in shading and curved text lines in the spine area of the bound volume. This will not only impair readability, but will also reduce the OCR accuracy. Further to our earlier attempt to correct such images, this paper proposes a simpler connected component analysis and regression technique. Compared to our earlier method, the present system is computationally less expensive and is resolution independent too. The implementation of the new system and improvement of OCR accuracy are presented in this paper.",
"title": ""
}
] |
[
{
"docid": "9f933f59d2a7852d1ce5dc986d056928",
"text": "The fundamental tradeoff between the rates at which energy and reliable information can be transmitted over a single noisy line is studied. Engineering inspiration for this problem is provided by powerline communication, RFID systems, and covert packet timing systems as well as communication systems that scavenge received energy. A capacity-energy function is defined and a coding theorem is given. The capacity-energy function is a non-increasing concave cap function. Capacity-energy functions for several channels are computed.",
"title": ""
},
{
"docid": "646195fbcfff0e2451e06d147ff7b389",
"text": "This paper presents the first complete, integrated and end-to-end solution for ad hoc cloud computing environments. Ad hoc clouds harvest resources from existing sporadically available, non-exclusive (i.e. Primarily used for some other purpose) and unreliable infrastructures. In this paper we discuss the problems ad hoc cloud computing solves and outline our architecture which is based on BOINC.",
"title": ""
},
{
"docid": "07e82c630ead780ad9e2382a1f713290",
"text": "The telecommunication world is evolving towards networks and different services. It is necessary to ensure interoperability between different networks to provide seamless and on-demand services. As the number of services and users in Internet Protocol Multimedia Subsystem (IMS) keeps increasing, network virtualization and cloud computing technologies seem to be a good alternative for Mobile Virtual Network Operators (MVNOs) in order to provide better services to customers and save cost and time. Cloud computing known as an IT environment that includes all elements of the IT and network stack, enabling the development, delivery, and consumption of Cloud Services. In this paper, we will present the challenges and issues of these emerging technologies. The first part of this paper describes Cloud computing as the networks of the future. It presents an overview of some works in this area. Some concepts like cloud services and Service oriented Architecture designed to facilitate rapid prototyping and deployment of on demand services that enhance flexibility, communication performance, robustness, and scalability are detailed. The second part exposes SOA and its concept, the third one deals with virtualization. Keywordscloud computing; services; SOA architecture; virtualization, IMS",
"title": ""
},
{
"docid": "ab5d9d6d389c38742047bb634e67d6a4",
"text": "We explore Deep Reinforcement Learning in a parameterized action space. Specifically, we investigate how to achieve sample-efficient end-to-end training in these tasks. We propose a new compact architecture for the tasks where the parameter policy is conditioned on the output of the discrete action policy. We also propose two new methods based on the state-of-the-art algorithms Trust Region Policy Optimization (TRPO) and Stochastic Value Gradient (SVG) to train such an architecture. We demonstrate that these methods outperform the state of the art method, Parameterized Action DDPG, on test domains.",
"title": ""
},
{
"docid": "62d1574e23fcf07befc54838ae2887c1",
"text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.",
"title": ""
},
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
},
{
"docid": "57d5b63c8ad062e1c15b1037e9973b28",
"text": "SCADA systems are widely used in critical infrastructure sectors, including electricity generation and distribution, oil and gas production and distribution, and water treatment and distribution. SCADA process control systems are typically isolated from the internet via firewalls. However, they may still be subject to illicit cyber penetrations and may be subject to cyber threats from disgruntled insiders. We have developed a set of command injection, data injection, and denial of service attacks which leverage the lack of authentication in many common control system communication protocols including MODBUS, DNP3, and EtherNET/IP. We used these exploits to aid in development of a neural network based intrusion detection system which monitors control system physical behavior to detect artifacts of command and response injection attacks. Finally, we present intrusion detection accuracy results for our neural network based IDS which includes input features derived from physical properties of the control system.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "3d2f52df1d4f422bed841c8c105963de",
"text": "Illustrative volume visualization frequently employs non-photorealistic rendering techniques to enhance important features or to suppress unwanted details. However, it is difficult to integrate multiple nonphotorealistic rendering approaches into a single framework due to great differences in the individual methods and their parameters. In this paper, we present the concept of style transfer functions. Our approach enables flexible data-driven illumination which goes beyond using the transfer function to just assign colors and opacities. An image-based lighting model uses sphere maps to represent non-photorealistic rendering styles. Style transfer functions allow us to combine a multitude of different shading styles in a single rendering. We extend this concept with a technique for curvature-controlled style contours and an illustrative transparency model. Our implementation of the presented methods allows interactive generation of high-quality volumetric illustrations.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "b691e07aa54a48fe7e3a2c6f6cf3754a",
"text": "We study fundamental aspects related to the efficient processing of the SPARQL query language for RDF, proposed by the W3C to encode machine-readable information in the Semantic Web. Our key contributions are (i) a complete complexity analysis for all operator fragments of the SPARQL query language, which -- as a central result -- shows that the SPARQL operator Optional alone is responsible for the PSpace-completeness of the evaluation problem, (ii) a study of equivalences over SPARQL algebra, including both rewriting rules like filter and projection pushing that are well-known from relational algebra optimization as well as SPARQL-specific rewriting schemes, and (iii) an approach to the semantic optimization of SPARQL queries, built on top of the classical chase algorithm. While studied in the context of a theoretically motivated set semantics, almost all results carry over to the official, bag-based semantics and therefore are of immediate practical relevance.",
"title": ""
},
{
"docid": "4cfcbac8ec942252b79f2796fa7490f0",
"text": "Over the next few years the amount of biometric data being at the disposal of various agencies and authentication service providers is expected to grow significantly. Such quantities of data require not only enormous amounts of storage but unprecedented processing power as well. To be able to face this future challenges more and more people are looking towards cloud computing, which can address these challenges quite effectively with its seemingly unlimited storage capacity, rapid data distribution and parallel processing capabilities. Since the available literature on how to implement cloud-based biometric services is extremely scarce, this paper capitalizes on the most important challenges encountered during the development work on biometric services, presents the most important standards and recommendations pertaining to biometric services in the cloud and ultimately, elaborates on the potential value of cloud-based biometric solutions by presenting a few existing (commercial) examples. In the final part of the paper, a case study on fingerprint recognition in the cloud and its integration into the e-learning environment Moodle is presented.",
"title": ""
},
{
"docid": "eee687e5c110bbfdd447b7a58444f34e",
"text": "We present a \"scale-and-stretch\" warping method that allows resizing images into arbitrary aspect ratios while preserving visually prominent features. The method operates by iteratively computing optimal local scaling factors for each local region and updating a warped image that matches these scaling factors as closely as possible. The amount of deformation of the image content is guided by a significance map that characterizes the visual attractiveness of each pixel; this significance map is computed automatically using a novel combination of gradient and salience-based measures. Our technique allows diverting the distortion due to resizing to image regions with homogeneous content, such that the impact on perceptually important features is minimized. Unlike previous approaches, our method distributes the distortion in all spatial directions, even when the resizing operation is only applied horizontally or vertically, thus fully utilizing the available homogeneous regions to absorb the distortion. We develop an efficient formulation for the nonlinear optimization involved in the warping function computation, allowing interactive image resizing.",
"title": ""
},
{
"docid": "0efe2a685756f01b7cf0c202c47006fc",
"text": "Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, we have little information concerning which of virtual reality's features provide the most leverage for enhancing understanding or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concepts or skills to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and its outcomes. Through Project Science Space, we have been trying to identify, use, and evaluate immersive virtual reality's affordances as a means to facilitate the mastery of complex, abstract concepts. In doing so, we are beginning to understand the interplay between virtual reality's features and other important factors in shaping the learning process and learning outcomes for this type of material. In this paper, we present a general model that describes how we think these factors work together and discuss some of the lessons we are learning about virtual reality's affordances in the context of this model for complex conceptual learning.",
"title": ""
},
{
"docid": "70c33dda7076e182ab2440e1f37186f7",
"text": "A loss of subchannel orthogonality due to timevariant multipath channels in orthogonal frequency-division multiplexing (OFDM) systems leads to interchannel interference (ICI) which increases the error floor in proportion to the Doppler frequency. In this paper, a simple frequency-domain equalization technique which can compensate for the effect of ICI in a multipath fading channel is proposed. In this technique, the equalization of the received OFDM signal is achieved by using the assumption that the channel impulse response (CIR) varies in a linear fashion during a block period and by compensating for the ICI terms that significantly affect the bit-error rate (BER) performance.",
"title": ""
},
{
"docid": "3d5fc1dbcf60edb18b59039ebc11ec1e",
"text": "A multimedia sensor network is a sensor network that consists of at least one sensor outputting multimedia data (video, audio, etc.). This kind of sensor network is a fairly new research domain. Hence, techniques for processing complex events in the context of multimedia sensor networks remain underdeveloped as they have rarely been considered previously. In this paper, we identify the requirements of a suitable language for processing complex events in multimedia sensor networks. The requirements are illustrated here through a motivating scenario related to \"Smart Home Automation\" application. We thoroughly survey existing studies and discuss them with respect to the considered requirements. The conclusions show that no existing language can fully address all the necessary requirements. A discussion about the needed features for processing complex events in multimedia sensor networks is also given here.",
"title": ""
},
{
"docid": "8cdd54a8bd288692132b57cb889b2381",
"text": "This research deals with the soft computing methodology of fuzzy cognitive map (FCM). Here a mathematical description of FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of FCM in modeling complex systems and the application of FCM to modeling and describing the behavior of a heat exchanger system is presented. The applicability of FCM to model the supervisor of complex systems is discussed and the FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.",
"title": ""
},
{
"docid": "988503b179c3b49d60f2e9eeda6c45c2",
"text": "Previous work has shown that high quality phrasal paraphrases can be extracted from bilingual parallel corpora. However, it is not clear whether bitexts are an appropriate resource for extracting more sophisticated sentential paraphrases, which are more obviously learnable from monolingual parallel corpora. We extend bilingual paraphrase extraction to syntactic paraphrases and demonstrate its ability to learn a variety of general paraphrastic transformations, including passivization, dative shift, and topicalization. We discuss how our model can be adapted to many text generation tasks by augmenting its feature set, development data, and parameter estimation routine. We illustrate this adaptation by using our paraphrase model for the task of sentence compression and achieve results competitive with state-of-the-art compression systems.",
"title": ""
},
{
"docid": "5f9da666504ade5b661becfd0a648978",
"text": "cefe.cnrs-mop.fr Under natural selection, individuals tend to adapt to their local environmental conditions, resulting in a pattern of LOCAL ADAPTATION (see Glossary). Local adaptation can occur if the direction of selection changes for an allele among habitats (antagonistic environmental effect), but it might also occur if the intensity of selection at several loci that are maintained as polymorphic by recurrent mutations covaries negatively among habitats. These two possibilities have been clearly identified in the related context of the evolution of senescence but have not have been fully appreciated in empirical and theoretical studies of local adaptation [1,2].",
"title": ""
},
{
"docid": "d1929611e91eb547c7dfb3c5f76701be",
"text": "In this paper we built an automatic system for performance reviews of a new approach of facial expression recognition FER which is essentially based on Histogram of Oriented Gradient and Normalized Cross Correlation. The performance was evaluated by varying two parameters: face resolution and colour space of images. Results show that good recognition can be attained using low resolution, in particular 64 by 64 pixels in RGB images.",
"title": ""
}
] |
scidocsrr
|
941bb11b29636c75ef2051900b9df2f4
|
BROADBAND ASYMMETRICAL MULTI-SECTION COUPLED LINE WILKINSON POWER DIVIDER WITH UNEQUAL POWER DIVIDING RATIO
|
[
{
"docid": "a4f9d30c707237f3c3eacaab9c6be523",
"text": "This paper presents the design of a novel power-divider circuit with an unequal power-dividing ratio. Unlike the conventional approaches, the characteristic impedance values of all the branch lines involved are independent of the dividing ratio. The electrical lengths of the line sections are the only circuit parameters to be adjusted. Moreover, the proposed structure does not require impedance transformers at the two output ports. By the introduction of a transmission line between one of the output ports and the isolation resistor, a flexible layout design with reduced parasitic coupling is achieved. For verification, the measured results of a 2 : 1 and a 4 : 1 power-divider circuits operating at 1 GHz are given. A relative bandwidth of over 20% is obtained based on a return loss and port isolation requirement of -20 dB.",
"title": ""
}
] |
[
{
"docid": "9ce6375549c9c328ce34c1f24f93ab7f",
"text": "Interest in social interactions, neighborhood effects, and social dynamics in the last several years has seen a revival. Unfortunately, little progress has been made on empirical estimation of such interactions and testing for their presence, on the development of policy interventions which work through social interactions, or on the evaluation of such interventions because several basic identification and estimation problems have not been seriously confronted. Nevertheless, most of these problems are in principle solvable and methods for identifying social interactions and estimating their magnitudes are available and are outlined in this paper. These methods address simultaneity, correlated unobservables, errors-in-variables, and endogenous group membership problems. Moreover, while policy interventions with presumed effects on social interactions have not been well-designed thus far, at least to measure social interactions per se, this problem is not inherent and several policy interventions are suggested which could work primarily through social interactions and whose evaluation could establish their magnitudes. Interest in social interactions, neighborhood effects, and social dynamics in the last several years has seen a revival. One reason is the widespread perception that many social indicators in the U.S. have worsened. The increase in wage and income inequality is one of the most prominent of these trends; a decline in the earnings and incomes of those at the bottom of the distribution is another, separate trend concerning absolute rather than relative changes; and an increase in the concentration of poverty and racial segregation is another. While there is a often a tendency to assume that everything is getting worse when this is not correct--a view usefully countered by Jencks (1992)--it is unquestionable that some measures of social well-being have deteriorated. That inequality, concentration of poverty, segregation--and their continued persistence over time--might be a partial result of social interactions--that is, direct non-market interactions between individuals--that lead to low-level equilibria, or “traps,” is an old idea that saw its last major discussion in the 1960s and early 1970s. That period saw extensive discussions of the notion of a culture of poverty from which the poor cannot escape (Lewis, 1966), of externalities in housing markets which lead to prisoner's dilemmas and Pareto-inferior housing equilibria (Davis and Whinston, 1961), of segregation as a natural sorting and self-reinforcing mechanism (Schelling, 1971), and of peer group effects in schools (Coleman, 1966). The recent revival of interest in such models has come from a variety of sources. In sociology, the work of Wilson (1987) almost single-handedly brought the concept of neighborhood effects and role models back into general discussion, a discussion which has spilled over into all the social science disciplines. In economics, the work of Romer (1986), Lucas (1988), and others on the externalities in 1 See also Pollak (1976) for a well-known study of interdependent preferences, which is another form of social interaction. 2 technology and human capital investment that promote economic growth has spilled over into more microeconomic concerns with neighborhoods, income inequality, and the like (Benabou, 1993,1996; Brock and Durlauf, 1995, forthcoming; Durlauf, 1996a,1996b to cite the most influential works among many). 
The growth of game theory in economics has also led to branch dealing with the development of social norms and conventions as a natural outcome of group interactions (Young, 1996). The new theoretical literature on these issues in economics has spawned a number of papers which demonstrate that, under specified conditions and model assumptions, certain policy interventions can be shown to possibly counter the effects of undesirable social interactions and can have social-welfare-improving consequences (e.g., Benabou, 1996). In many cases, these interventions have been shown to permit an escape from the low-level equilibria resulting from those social interactions. A natural question is whether there is any empirical evidence that these, or other policy interventions that might be considered, would have the effects hypothesized, and for the reasons hypothesized, if they were in fact implemented. The answer to this question, in turn, naturally leads to an investigation of whether there have been any policy interventions in the past which have had, either intentionally or unintentionally, effects which have operated directly or indirectly on social interactions, and have been shown to have positive effects of one kind or another. This is the motivating issue for this paper. Answering these questions necessarily requires addressing the prior issue of whether the existence of social interactions can be detected with empirical analysis in the first place, which is",
"title": ""
},
{
"docid": "aeb9a3b1de003f87f6260f1cbe1e16d9",
"text": "As learning environments are gaining in features and in complexity, the e-learning industry is more and more interested in features easing teachers’ work. Learning design being a critical and time consuming task could be facilitated by intelligent components helping teachers build their learning activities. The Intelligent Learning Design Recommendation System (ILD-RS) is such a software component, designed to recommend learning paths during the learning design phase in a Learning Management System (LMS). Although ILD-RS exploits several parameters which are sometimes subject to controversy, such as learning styles and teaching styles, the main interest of the component lies on its algorithm based on Markov decision processes that takes into account the teacher’s use to refine its accuracy.",
"title": ""
},
{
"docid": "4e791e4367b5ef9ff4259a87b919cff7",
"text": "Considerable attention has been paid to dating the earliest appearance of hominins outside Africa. The earliest skeletal and artefactual evidence for the genus Homo in Asia currently comes from Dmanisi, Georgia, and is dated to approximately 1.77–1.85 million years ago (Ma)1. Two incisors that may belong to Homo erectus come from Yuanmou, south China, and are dated to 1.7 Ma2; the next-oldest evidence is an H. erectus cranium from Lantian (Gongwangling)—which has recently been dated to 1.63 Ma3—and the earliest hominin fossils from the Sangiran dome in Java, which are dated to about 1.5–1.6 Ma4. Artefacts from Majuangou III5 and Shangshazui6 in the Nihewan basin, north China, have also been dated to 1.6–1.7 Ma. Here we report an Early Pleistocene and largely continuous artefact sequence from Shangchen, which is a newly discovered Palaeolithic locality of the southern Chinese Loess Plateau, near Gongwangling in Lantian county. The site contains 17 artefact layers that extend from palaeosol S15—dated to approximately 1.26 Ma—to loess L28, which we date to about 2.12 Ma. This discovery implies that hominins left Africa earlier than indicated by the evidence from Dmanisi. An Early Pleistocene artefact assemblage from the Chinese Loess Plateau indicates that hominins had left Africa by at least 2.1 million years ago, and occupied the Loess Plateau repeatedly for a long time.",
"title": ""
},
{
"docid": "1ed656aa46b9b79a3ae91bc6aa848190",
"text": "STUDY OBJECTIVES\nSome, but not all, researchers report that obstructive sleep apnea (OSA) patients experience increased depressive symptoms. Many psychological symptoms of OSA are explained in part by other OSA comorbidities (age, hypertension, body mass). People who use more passive and less active coping report more depressive symptoms. We examined relationships between coping and depressive symptoms in OSA.\n\n\nSETTING\nN/A.\n\n\nDESIGN/PARTICIPANTS\n64 OSA (respiratory disturbance index (RDI) > or = 15) patients were studied with polysomnography and completed Ways of Coping (WC), Profile of Mood States (POMS), Center for Epidemiological Studies-Depression (CESD) scales. WC was consolidated into Approach (active) and Avoidance (passive) factors. Data were analyzed using SPSS 9.0 regression with CESD as the dependent variable and WC Approach and Avoidance as the independent variables.\n\n\nINTERVENTIONS\nN/A.\n\n\nMEASUREMENTS AND RESULTS\nWC Approach factor (B=-1.105, beta=-.317, p=.009) was negatively correlated and WC Avoidance factor (B=1.353, beta=.376, p=.007) was positively correlated with CESD scores. These factors explained an additional 8% of CESD variance (p<.001) beyond that explained by the covariates: demographic variables, RDI, and fatigue (as measured by the POMS).\n\n\nCONCLUSIONS\nMore passive and less active coping was associated with more depressive symptoms in OSA patients. The extent of depression experienced by OSA patients may not be due solely to effects of OSA itself. Choice of coping strategies may help determine who will experience more depressive symptoms.",
"title": ""
},
{
"docid": "0c5b8a0948386484c5cd96f3413444f2",
"text": "Applications accelerated by field-programmable gate arrays (FPGAs) often require pipelined floating-point accumulators with a variety of different trade-offs. Although previous work has introduced numerous floating-point accumulation architectures, few cores are available for public use, which forces designers to use fixed-point implementations or vendor-provided cores that are not portable and are often not optimized for the desired set of trade-offs. In this article, we combine and extend previous floating-point accumulator architectures into a configurable, open-source core, referred to as the unified accumulator architecture (UAA), which enables designers to choose between different trade-offs for different applications. UAA is portable across FPGAs and allows designers to specialize the underlying adder core to take advantage of device-specific optimizations. By providing an extensible, open-source implementation, we hope for the research community to extend the provided core with new architectures and optimizations.",
"title": ""
},
{
"docid": "6997284b9a3b8c8e7af639e92399db46",
"text": "Research into rehabilitation robotics has grown rapidly and the number of therapeutic rehabilitation robots has expanded dramatically during the last two decades. Robotic rehabilitation therapy can deliver high-dosage and high-intensity training, making it useful for patients with motor disorders caused by stroke or spinal cord disease. Robotic devices used for motor rehabilitation include end-effector and exoskeleton types; herein, we review the clinical use of both types. One application of robot-assisted therapy is improvement of gait function in patients with stroke. Both end-effector and the exoskeleton devices have proven to be effective complements to conventional physiotherapy in patients with subacute stroke, but there is no clear evidence that robotic gait training is superior to conventional physiotherapy in patients with chronic stroke or when delivered alone. In another application, upper limb motor function training in patients recovering from stroke, robot-assisted therapy was comparable or superior to conventional therapy in patients with subacute stroke. With end-effector devices, the intensity of therapy was the most important determinant of upper limb motor recovery. However, there is insufficient evidence for the use of exoskeleton devices for upper limb motor function in patients with stroke. For rehabilitation of hand motor function, either end-effector and exoskeleton devices showed similar or additive effects relative to conventional therapy in patients with chronic stroke. The present evidence supports the use of robot-assisted therapy for improving motor function in stroke patients as an additional therapeutic intervention in combination with the conventional rehabilitation therapies. Nevertheless, there will be substantial opportunities for technical development in near future.",
"title": ""
},
{
"docid": "8f13fbf6de0fb0685b4a39ee5f3bb415",
"text": "This review presents one of the eight theories of the quality of life (QOL) used for making the SEQOL (self-evaluation of quality of life) questionnaire or the quality of life as realizing life potential. This theory is strongly inspired by Maslow and the review furthermore serves as an example on how to fulfill the demand for an overall theory of life (or philosophy of life), which we believe is necessary for global and generic quality-of-life research. Whereas traditional medical science has often been inspired by mechanical models in its attempts to understand human beings, this theory takes an explicitly biological starting point. The purpose is to take a close view of life as a unique entity, which mechanical models are unable to do. This means that things considered to be beyond the individual's purely biological nature, notably the quality of life, meaning in life, and aspirations in life, are included under this wider, biological treatise. Our interpretation of the nature of all living matter is intended as an alternative to medical mechanism, which dates back to the beginning of the 20th century. New ideas such as the notions of the human being as nestled in an evolutionary and ecological context, the spontaneous tendency of self-organizing systems for realization and concord, and the central role of consciousness in interpreting, planning, and expressing human reality are unavoidable today in attempts to scientifically understand all living matter, including human life.",
"title": ""
},
{
"docid": "17b7930531d63d51e33c714a072acbe8",
"text": "Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach, to use AVI to initialize the variational parameters and run stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common in training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.",
"title": ""
},
{
"docid": "006347cd3839d9fabd983e7cc379322d",
"text": "Recent progress in both Artificial Intelligence (AI) and Robotics have enabled the development of general purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially Human-Robot Interaction (HRI) for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (i) execute action sequences to complete user requests, (ii) efficiently ask questions to resolve user requests, (iii) understand human commands given in natural language, and (iv) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.",
"title": ""
},
{
"docid": "8dbe7ed9d801c7c39d583de6ebef9908",
"text": "We propose a novel approach for content based color image classification using Support Vector Machine (SVM). Traditional classification approaches deal poorly on content based image classification tasks being one of the reasons of high dimensionality of the feature space. In this paper, color image classification is done on features extracted from histograms of color components. The benefit of using color image histograms are better efficiency, and insensitivity to small changes in camera view-point i.e. translation and rotation. As a case study for validation purpose, experimental trials were done on a database of about 500 images divided into four different classes has been reported and compared on histogram features for RGB, CMYK, Lab, YUV, YCBCR, HSV, HVC and YIQ color spaces. Results based on the proposed approach are found encouraging in terms of color image classification accuracy.",
"title": ""
},
{
"docid": "b5df3d884385b8c4e65c42d8ee3a3b1b",
"text": "Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves. In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our methods primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and inverted pendulum.",
"title": ""
},
{
"docid": "88d87dacef186e648fb648fdea37e4bc",
"text": "Two-factor authentication (TFA), enabled by hardware tokens and personal devices, is gaining momentum. The security of TFA schemes relies upon a human-memorable password p drawn from some implicit dictionary D and a t-bit device-generated one-time PIN z. Compared to password-only authentication, TFA reduces the probability of adversary’s online guessing attack to 1/(|D| ∗ 2) (and to 1/2 if the password p is leaked). However, known TFA schemes do not improve security in the face of offline dictionary attacks, because an adversary who compromises the service and learns a (salted) password hash can still recover the password with O(|D|) amount of effort. This password might be reused by the user at another site employing password-only authentication. We present a suite of efficient novel TFA protocols which improve upon password-only authentication by a factor of 2 with regards to both the online guessing attack and the offline dictionary attack. To argue the security of the presented protocols, we first provide a formal treatment of TFA schemes in general. The TFA protocols we present enable utilization of devices that are connected to the client over several channel types, formed using manual PIN entry, visual QR code capture, wireless communication (Bluetooth or WiFi), and combinations thereof. Utilizing these various communication settings we design, implement, and evaluate the performance of 13 different TFA mechanisms, and we analyze them with respect to security, usability (manual effort needed beyond typing a password), and deployability (need for additional hardware or software), showing consistent advantages over known TFA schemes.",
"title": ""
},
{
"docid": "a16a66d4eac400a328b7ea7276d37ed4",
"text": "In this paper, we analyze the impact of Layout Dependent Effect (LDE) observed on MOSFETs. It is shown that changing the Layout have an impact on MOSFET device parameters and reliability. Here, we studied the Well Proximity Effect (WPE), Length of diffusion (LOD) and Oxide Spacing Effect (OSE) impacts on device MOSFET parameters and reliability. We also analyzed SiGe impacts on LDE, since it is commonly used to boost device performance.",
"title": ""
},
{
"docid": "c05b2317f529d79a2d05223c249549b6",
"text": "PURPOSE\nThis study presents a two-degree customized animated stimulus developed to evaluate smooth pursuit in children and investigates the effect of its predetermined characteristics (stimulus type and size) in an adult population. Then, the animated stimulus is used to evaluate the impact of different pursuit motion paradigms in children.\n\n\nMETHODS\nTo study the effect of animating a stimulus, eye movement recordings were obtained from 20 young adults while the customized animated stimulus and a standard dot stimulus were presented moving horizontally at a constant velocity. To study the effect of using a larger stimulus size, eye movement recordings were obtained from 10 young adults while presenting a standard dot stimulus of different size (1° and 2°) moving horizontally at a constant velocity. Finally, eye movement recordings were obtained from 12 children while the 2° customized animated stimulus was presented after three different smooth pursuit motion paradigms. Performance parameters, including gains and number of saccades, were calculated for each stimulus condition.\n\n\nRESULTS\nThe animated stimulus produced in young adults significantly higher velocity gain (mean: 0.93; 95% CI: 0.90-0.96; P = .014), position gain (0.93; 0.85-1; P = .025), proportion of smooth pursuit (0.94; 0.91-0.96, P = .002), and fewer saccades (5.30; 3.64-6.96, P = .008) than a standard dot (velocity gain: 0.87; 0.82-0.92; position gain: 0.82; 0.72-0.92; proportion smooth pursuit: 0.87; 0.83-0.90; number of saccades: 7.75; 5.30-10.46). In contrast, changing the size of a standard dot stimulus from 1° to 2° did not have an effect on smooth pursuit in young adults (P > .05). Finally, smooth pursuit performance did not significantly differ in children for the different motion paradigms when using the animated stimulus (P > .05).\n\n\nCONCLUSIONS\nAttention-grabbing and more dynamic stimuli, such as the developed animated stimulus, might potentially be useful for eye movement research. Finally, with such stimuli, children perform equally well irrespective of the motion paradigm used.",
"title": ""
},
{
"docid": "225e7b608d06d218144853b900d40fd1",
"text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.",
"title": ""
},
{
"docid": "259dab2aa5c11934ab728f6b779a9310",
"text": "Language identification, as the task of determining the language a given text is written in, has progressed substantially in recent decades. However, three main issues remain still unresolved: (i) distinction of similar languages, (ii) detection of multilingualism in a single document, and (iii) identifying the language of short texts. In this paper, we describe our work on the development of a benchmark to encourage further research in these three directions, set forth an evaluation framework suitable for the task, and make a dataset of annotated tweets publicly available for research purposes. We also describe the shared task we organized to validate and assess the evaluation framework and dataset with systems submitted by seven different participants, and analyze the performance of these systems. The evaluation of the results submitted by the participants of the shared task helped us shed some light on the shortcomings of state-of-the-art language identification systems, and gives insight into the extent to which the brevity, multilingualism, and language similarity found in texts exacerbate the performance of language identifiers. Our dataset with nearly 35,000 tweets and the evaluation framework provide researchers and practitioners with suitable resources to further study the aforementioned issues on language identification within a common setting that enables to compare results with one another.",
"title": ""
},
{
"docid": "7f27e9b29e6ed2800ef850e6022d29ba",
"text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.",
"title": ""
},
{
"docid": "dfdb7ab4a1ce74695757442b83f246fe",
"text": "In this paper, we propose an O(N) time distributed algorithm for computing betweenness centralities of all nodes in the network where N is the number of nodes. Our distributed algorithm is designed under the widely employed CONGEST model in the distributed computing community which limits each message only contains O(log N) bits. To our best knowledge, this is the first linear time deterministic distributed algorithm for computing the betweenness centralities in the published literature. We also give a lower bound for distributively computing the betweenness centrality under the CONGEST model as Ω(D+N/ log N) where D is the diameter of the network. This implies that our distributed algorithm is nearly optimal.",
"title": ""
},
{
"docid": "2fd16e94706bec951c2e194974249c42",
"text": "This paper presents a novel design of ternary logic inverters using carbon nanotube FETs (CNTFETs). Multiple-valued logic (MVL) circuits have attracted substantial interest due to the capability of increasing information content per unit area. In the past extensive design techniques for MVL circuits (especially ternary logic inverters) have been proposed for implementation in CMOS technology. In CNTFET device, the threshold voltage of the transistor can be controlled by controlling the chirality vector (i.e. the diameter); in this paper this feature is exploited to design ternary logic inverters. New designs are proposed and compared with existing CNTFET-based designs. Extensive simulation results using SPICE demonstrate that power delay product is improved by 300% comparing to the conventional ternary gate design.",
"title": ""
},
{
"docid": "cb2f5ac9292df37860b02313293d2f04",
"text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a",
"title": ""
}
] |
scidocsrr
|
2702eec221beb81933ca083859a15efd
|
On-line Random Forests
|
[
{
"docid": "d2ca6e3dbf35d1205eaeece0adb5646f",
"text": "The Self-Organizing Map (SOM) is one of the best known and most popular neural network-based data analysis tools. Many variants of the SOM have been proposed, like the Neural Gas by Martinetz and Schulten, the Growing Cell Structures by Fritzke, and the Tree-Structured SOM by Koikkalainen and Oja. The purpose of such variants is either to make a more flexible topology, suitable for complex data analysis problems or to reduce the computational requirements of the SOM, especially the time-consuming search for the best-matching unit in large maps. We propose here a new variant called the Evolving Tree which tries to combine both of these advantages. The nodes are arranged in a tree topology that is allowed to grow when any given branch receives a lot of hits from the training vectors. The search for the best matching unit and its neighbors is conducted along the tree and is therefore very efficient. A comparison experiment with high dimensional real world data shows that the performance of the proposed method is better than some classical variants of SOM.",
"title": ""
},
{
"docid": "bdf81fccbfa77dadcad43699f815475e",
"text": "The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set).\n Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%.",
"title": ""
}
] |
[
{
"docid": "cc76afb929bdffe1b084843a6b267602",
"text": "Software applications continue to grow in terms of the number of features they offer, making personalization increasingly important. Research has shown that most users prefer the control afforded by an adaptable approach to personalization rather than a system-controlled adaptive approach. Both types of approaches offer advantages and disadvantages. No study, however, has compared the efficiency of the two approaches. In two controlled lab studies, we measured the efficiency of static, adaptive and adaptable interfaces in the context of pull-down menus. These menu conditions were implemented as split menus, in which the top four items remained static, were adaptable by the subject, or adapted according to the subject’s frequently and recently used items. The results of Study 1 showed that a static split menu was significantly faster than an adaptive split menu. Also, when the adaptable split menu was not the first condition presented to subjects, it was significantly faster than the adaptive split menu, and not significantly different from the static split menu. The majority of users preferred the adaptable menu overall. Several implications for personalizing user interfaces based on these results are discussed. One question which arose after Study 1 was whether prior exposure to the menus and task has an effect on the efficiency of the adaptable menus. A second study was designed to follow-up on the theory that prior exposure to different types of menu layouts influences a user’s willingness to customize. Though the observed power of this study was low and no statistically significant effect of type of exposure was found, a possible trend arose: that exposure to an adaptive interface may have a positive impact on the user’s willingness to customize. This and other secondary results are discussed, along with several areas for future work. The research presented in this thesis should be seen as an initial step towards a more thorough comparison of adaptive and adaptable interfaces, and should provide motivation for further development of adaptable interaction techniques.",
"title": ""
},
{
"docid": "d4f28f36cb55cd2b01a85baeec4ea4a0",
"text": "Reconstruction of complex auricular malformations is one of the longest surgical technique to master, because it requires an extremely detailed analysis of the anomaly and of the skin potential, as well as a to learn how to carve a complex 3D structure in costal cartilage. Small anomalies can be taken care of by any plastic surgeon, providing that he/she is aware of all the refinements of ear surgery. In this chapter, we analyze retrospectively 30 years of auricular reconstruction, ranging from small anomalies to microtia (2500 cases), excluding aesthetics variants such as prominent ears.",
"title": ""
},
{
"docid": "833c110e040311909aa38b05e457b2af",
"text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.",
"title": ""
},
{
"docid": "e770e39ff5986516c366260aedba0c61",
"text": "ConvNets, or Convolutional Neural Networks (CNN), are state-of-the-art classification algorithms, achieving near-human performance in visual recognition [1]. New trends such as augmented reality demand always-on visual processing in wearable devices. Yet, advanced ConvNets achieving high recognition rates are too expensive in terms of energy as they require substantial data movement and billions of convolution computations. Today, state-of-the-art mobile GPU's and ConvNet accelerator ASICs [2][3] only demonstrate energy-efficiencies of 10's to several 100's GOPS/W, which is one order of magnitude below requirements for always-on applications. This paper introduces the concept of hierarchical recognition processing, combined with the Envision platform: an energy-scalable ConvNet processor achieving efficiencies up to 10TOPS/W, while maintaining recognition rate and throughput. Envision hereby enables always-on visual recognition in wearable devices.",
"title": ""
},
{
"docid": "d648cdf8423f3ae447f027feb97b02e1",
"text": "This paper proposes a new idea that uses Wikipedia categories as answer types and defines candidate sets inside Wikipedia. The focus of a given question is searched in the hierarchy of Wikipedia main pages. Our searching strategy combines head-noun matching and synonym matching provided in semantic resources. The set of answer candidates is determined by the entry hierarchy in Wikipedia and the hyponymy hierarchy in WordNet. The experimental results show that the approach can find candidate sets in a smaller size but achieve better performance especially for ARTIFACT and ORGANIZATION types, where the performance is better than state-of-the-art Chinese factoid QA systems.",
"title": ""
},
{
"docid": "80c745ee8535d9d53819ced4ad8f996d",
"text": "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR).",
"title": ""
},
{
"docid": "33b69d9f867ccb55835c80a2f5b80c91",
"text": "The volume of RDF data continues to grow over the past decade and many known RDF datasets have billions of triples. A grant challenge of managing this huge RDF data is how to access this big RDF data efficiently. A popular approach to addressing the problem is to build a full set of permutations of (S, P, O) indexes. Although this approach has shown to accelerate joins by orders of magnitude, the large space overhead limits the scalability of this approach and makes it heavyweight. In this paper, we present TripleBit, a fast and compact system for storing and accessing RDF data. The design of TripleBit has three salient features. First, the compact design of TripleBit reduces both the size of stored RDF data and the size of its indexes. Second, TripleBit introduces two auxiliary index structures, ID-Chunk matrix and ID-Predicate bit matrix, to minimize the number of index selection during query evaluation. Third, its query processor dynamically generates an optimal execution ordering for join queries, leading to fast query execution and effective reduction on the size of intermediate results. Our experiments show that TripleBit outperforms RDF-3X, MonetDB, BitMat on LUBM, UniProt and BTC 2012 benchmark queries and it offers orders of mangnitude performance improvement for some complex join queries.",
"title": ""
},
{
"docid": "c7c103a48a80ffee561a120913855758",
"text": "We study parameter estimation in Nonlinear Factor Analysis (NFA) where the generative model is parameterized by a deep neural network. Recent work has focused on learning such models using inference (or recognition) networks; we identify a crucial problem when modeling large, sparse, highdimensional datasets – underfitting. We study the extent of underfitting, highlighting that its severity increases with the sparsity of the data. We propose methods to tackle it via iterative optimization inspired by stochastic variational inference (Hoffman et al. , 2013) and improvements in the sparse data representation used for inference. The proposed techniques drastically improve the ability of these powerful models to fit sparse data, achieving state-of-the-art results on a benchmark textcount dataset and excellent results on the task of top-N recommendation.",
"title": ""
},
{
"docid": "50c0ebb4a984ea786eb86af9849436f3",
"text": "We systematically reviewed school-based skills building behavioural interventions for the prevention of sexually transmitted infections. References were sought from 15 electronic resources, bibliographies of systematic reviews/included studies and experts. Two authors independently extracted data and quality-assessed studies. Fifteen randomized controlled trials (RCTs), conducted in the United States, Africa or Europe, met the inclusion criteria. They were heterogeneous in terms of intervention length, content, intensity and providers. Data from 12 RCTs passed quality assessment criteria and provided evidence of positive changes in non-behavioural outcomes (e.g. knowledge and self-efficacy). Intervention effects on behavioural outcomes, such as condom use, were generally limited and did not demonstrate a negative impact (e.g. earlier sexual initiation). Beneficial effect on at least one, but never all behavioural outcomes assessed was reported by about half the studies, but this was sometimes limited to a participant subgroup. Sexual health education for young people is important as it increases knowledge upon which to make decisions about sexual behaviour. However, a number of factors may limit intervention impact on behavioural outcomes. Further research could draw on one of the more effective studies reviewed and could explore the effectiveness of 'booster' sessions as young people move from adolescence to young adulthood.",
"title": ""
},
{
"docid": "73ed034395a1552a5a1bdf7f40c61099",
"text": "Orbits combines a visual display and an eye motion sensor to allow a user to select between options by tracking a cursor with the eyes as the cursor travels in a circular path around each option. Using an off-the-shelf Jins MEME pair of eyeglasses, we present a pilot study that suggests that the eye movement required for Orbits can be sensed using three electrodes: one in the nose bridge and one in each nose pad. For forced choice binary selection, we achieve a 2.6 bits per second (bps) input rate at 250ms per input. We also inntroduce Head Orbits, where the user fixates the eyes on a target and moves the head in synchrony with the orbiting target. Measuring only the relative movement of the eyes in relation to the head, this method achieves a maximum rate of 2.0 bps at 500ms per input. Finally, we combine the two techniques together with a gyro to create an interface with a maximum input rate of 5.0 bps.",
"title": ""
},
{
"docid": "724c74408f59edaf1b1b4859ccd43ee9",
"text": "Motion sickness is a common disturbance occurring in healthy people as a physiological response to exposure to motion stimuli that are unexpected on the basis of previous experience. The motion can be either real, and therefore perceived by the vestibular system, or illusory, as in the case of visual illusion. A multitude of studies has been performed in the last decades, substantiating different nauseogenic stimuli, studying their specific characteristics, proposing unifying theories, and testing possible countermeasures. Several reviews focused on one of these aspects; however, the link between specific nauseogenic stimuli and the unifying theories and models is often not clearly detailed. Readers unfamiliar with the topic, but studying a condition that may involve motion sickness, can therefore have difficulties to understand why a specific stimulus will induce motion sickness. So far, this general audience struggles to take advantage of the solid basis provided by existing theories and models. This review focuses on vestibular-only motion sickness, listing the relevant motion stimuli, clarifying the sensory signals involved, and framing them in the context of the current theories.",
"title": ""
},
{
"docid": "ffd04d534aefbfb00879fed5c8480dd7",
"text": "This paper deals with the mechanical construction and static strength analysis of an axial flux permanent magnet machine with segmented armature torus topology, which consists of two external rotors and an inner stator. In order to conduct the three dimensional magnetic flux, the soft magnetic composites is used to manufacture the stator segments and the rotor yoke. On the basis of the detailed electromagnetic analysis, the main geometric dimensions of the machine are determined, which is also the precondition of the mechanical construction. Through the application of epoxy with high thermal conductivity and high mechanical strength, the independent segments of the stator are bounded together with the liquid-cooling system, which makes a high electrical load possible. Due to the unavoidable errors in the manufacturing and montage, there might be large force between the rotors and the stator. Thus, the rotor is held with a rotor carrier made from aluminum alloy with high elastic modulus and the form of the rotor carrier is optimized, in order to reduce the axial deformation. In addition, the shell and the shaft are designed and the choice of bearings is discussed. Finally, the strain and deformation of different parts are analyzed with the help of finite element method to validate the mechanical construction.",
"title": ""
},
{
"docid": "6af82b74f0c5f78a013aba63e1ad08b1",
"text": "Background/Objective:Many studies have identified early-life risk factors for subsequent childhood overweight/obesity, but few have evaluated how they combine to influence risk of childhood overweight/obesity. We examined associations, individually and in combination, of potentially modifiable risk factors in the first 1000 days after conception with childhood adiposity and risk of overweight/obesity in an Asian cohort.Methods:Six risk factors were examined: maternal pre-pregnancy overweight/obesity (body mass index (BMI) ⩾25 kg m−2), paternal overweight/obesity at 24 months post delivery, maternal excessive gestational weight gain, raised maternal fasting glucose during pregnancy (⩾5.1 mmol l−1), breastfeeding duration <4 months and early introduction of solid foods (<4 months). Associations between number of risk factors and adiposity measures (BMI, waist-to-height ratio (WHtR), sum of skinfolds (SSFs), fat mass index (FMI) and overweight/obesity) at 48 months were assessed using multivariable regression models.Results:Of 858 children followed up at 48 months, 172 (19%) had none, 274 (32%) had 1, 244 (29%) had 2, 126 (15%) had 3 and 42 (5%) had ⩾4 risk factors. Adjusting for confounders, significant graded positive associations were observed between number of risk factors and adiposity outcomes at 48 months. Compared with children with no risk factors, those with four or more risk factors had s.d. unit increases of 0.78 (95% confidence interval 0.41–1.15) for BMI, 0.79 (0.41–1.16) for WHtR, 0.46 (0.06–0.83) for SSF and 0.67 (0.07–1.27) for FMI. The adjusted relative risk of overweight/obesity in children with four or more risk factors was 11.1(2.5–49.1) compared with children with no risk factors. Children exposed to maternal pre-pregnancy (11.8(9.8–13.8)%) or paternal overweight status (10.6(9.6-11.6)%) had the largest individual predicted probability of child overweight/obesity.Conclusions:Early-life risk factors added cumulatively to increase childhood adiposity and risk of overweight/obesity. Early-life and preconception intervention programmes may be more effective in preventing overweight/obesity if they concurrently address these multiple modifiable risk factors.",
"title": ""
},
{
"docid": "d469d31d26d8bc07b9d8dfa8ce277e47",
"text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.",
"title": ""
},
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "a245aca07bd707ee645cf5cb283e7c5e",
"text": "The paradox of blunted parathormone (PTH) secretion in patients with severe hypomagnesemia has been known for more than 20 years, but the underlying mechanism is not deciphered. We determined the effect of low magnesium on in vitro PTH release and on the signals triggered by activation of the calcium-sensing receptor (CaSR). Analogous to the in vivo situation, PTH release from dispersed parathyroid cells was suppressed under low magnesium. In parallel, the two major signaling pathways responsible for CaSR-triggered block of PTH secretion, the generation of inositol phosphates, and the inhibition of cAMP were enhanced. Desensitization or pertussis toxin-mediated inhibition of CaSR-stimulated signaling suppressed the effect of low magnesium, further confirming that magnesium acts within the axis CaSR-G-protein. However, the magnesium binding site responsible for inhibition of PTH secretion is not identical with the extracellular ion binding site of the CaSR, because the magnesium deficiency-dependent signal enhancement was not altered on CaSR receptor mutants with increased or decreased affinity for calcium and magnesium. By contrast, when the magnesium affinity of the G alpha subunit was decreased, CaSR activation was no longer affected by magnesium. Thus, the paradoxical block of PTH release under magnesium deficiency seems to be mediated through a novel mechanism involving an increase in the activity of G alpha subunits of heterotrimeric G-proteins.",
"title": ""
},
{
"docid": "16cbc21b3092a5ba0c978f0cf38710ab",
"text": "A major challenge to the problem of community question answering is the lexical and semantic gap between the sentence representations. Some solutions to minimize this gap includes the introduction of extra parameters to deep models or augmenting the external handcrafted features. In this paper, we propose a novel attentive recurrent tensor network for solving the lexical and semantic gap in community question answering. We introduce token-level and phrase-level attention strategy that maps input sequences to the output using trainable parameters. Further, we use the tensor parameters to introduce a 3-way interaction between question, answer and external features in vector space. We introduce simplified tensor matrices with L2 regularization that results in smooth optimization during training. The proposed model achieves state-of-the-art performance on the task of answer sentence selection (TrecQA and WikiQA datasets) while outperforming the current state-of-the-art on the tasks of best answer selection (Yahoo! L4) and answer triggering task (WikiQA).",
"title": ""
},
{
"docid": "9b9425132e89d271ed6baa0dbc16b941",
"text": "Although personalized recommendation has been investigated for decades, the wide adoption of Latent Factor Models (LFM) has made the explainability of recommendations a critical issue to both the research community and practical application of recommender systems. For example, in many practical systems the algorithm just provides a personalized item recommendation list to the users, without persuasive personalized explanation about why such an item is recommended while another is not. Unexplainable recommendations introduce negative effects to the trustworthiness of recommender systems, and thus affect the effectiveness of recommendation engines. In this work, we investigate explainable recommendation in aspects of data explainability, model explainability, and result explainability, and the main contributions are as follows: 1. Data Explainability: We propose Localized Matrix Factorization (LMF) framework based Bordered Block Diagonal Form (BBDF) matrices, and further applied this technique for parallelized matrix factorization. 2. Model Explainability: We propose Explicit Factor Models (EFM) based on phrase-level sentiment analysis, as well as dynamic user preference modeling based on time series analysis. In this work, we extract product features and user opinions towards different features from large-scale user textual reviews based on phrase-level sentiment analysis techniques, and introduce the EFM approach for explainable model learning and recommendation. 3. Economic Explainability: We propose the Total Surplus Maximization (TSM) framework for personalized recommendation, as well as the model specification in different types of online applications. Based on basic economic concepts, we provide the definitions of utility, cost, and surplus in the application scenario of Web services, and propose the general framework of web total surplus calculation and maximization.",
"title": ""
},
{
"docid": "c175910d1809ad6dc073f79e4ca15c0c",
"text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.",
"title": ""
}
] |
scidocsrr
|
f74d810878292ab625f67c5e2ffaeabc
|
Eye Gaze Correction with a Single Webcam Based on Eye-Replacement
|
[
{
"docid": "b29947243b1ad21b0529a6dd8ef3c529",
"text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.",
"title": ""
}
] |
[
{
"docid": "757441e95be19ca4569c519fb35adfb7",
"text": "Autonomous driving in public roads requires precise localization within the range of few centimeters. Even the best current precise localization system based on the Global Navigation Satellite System (GNSS) can not always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finder and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDARs) are very expensive sensors and stereo vision requires powerful dedicated hardware to process the cameras information. In this context, this article presents a low-cost architecture of sensors and data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings in the vehicle's backwards. This information is used to localize the vehicle in a map, that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system on a real autonomous driving situation.",
"title": ""
},
{
"docid": "ca072e97f8a5486347040aeaa7909d60",
"text": "Camera-based stereo-vision provides cost-efficient vision capabilities for robotic systems. The objective of this paper is to examine the performance of stereo-vision as means to enable a robotic inspection cell for haptic quality testing with the ability to detect relevant information related to the inspection task. This information comprises the location and 3D representation of a complex object under inspection as well as the location and type of quality features which are subject to the inspection task. Among the challenges is the low-distinctiveness of features in neighboring area, inconsistent lighting, similar colors as well as low intra-class variances impeding the retrieval of quality characteristics. The paper presents the general outline of the vision chain as well as performance analysis of various algorithms for relevant steps in the machine vision chain thus indicating the capabilities and drawbacks of a camera-based stereo-vision for flexible use in complex machine vision tasks.",
"title": ""
},
{
"docid": "dfbe5a92d45d4081910b868d78a904d0",
"text": "Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.",
"title": ""
},
{
"docid": "a999bf3da879dde7fc2acb8794861daf",
"text": "Most OECD Member countries have sought to renew their systems and structures of public management in the last 10-15 years. Some started earlier than others and the emphasis will vary among Member countries according to their historic traditions and institutions. There is no single best model of public management, but what stands out most clearly is the extent to which countries have pursued and are pursuing broadly common approaches to public management reform. This is most probably because countries have been responding to essentially similar pressures to reform.",
"title": ""
},
{
"docid": "fc70a1820f838664b8b51b5adbb6b0db",
"text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.",
"title": ""
},
{
"docid": "03ddd583496d561d6e5389b97db61916",
"text": "A spatial outlier is a spatially referenced object whose non-spatial attribute values are significantly different from the values of its neighborhood. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and useful spatial patterns for further analysis. One drawback of existing methods is that normal objects tend to be falsely detected as spatial outliers when their neighborhood contains true spatial outliers. In this paper, we propose a suite of spatial outlier detection algorithms to overcome this disadvantage. We formulate the spatial outlier detection problem in a general way and design algorithms which can accurately detect spatial outliers. In addition, using a real-world census data set, we demonstrate that our approaches can not only avoid detecting false spatial outliers but also find true spatial outliers ignored by existing methods.",
"title": ""
},
{
"docid": "89013222fccc85c1321020153b8a416b",
"text": "The objective of this paper is to summarize the work that has been developed by the authors for the last several years, in order to demonstrate that the Theory of Characteristic Modes can be used to perform a systematic design of different types of antennas. Characteristic modes are real current modes that can be computed numerically for conducting bodies of arbitrary shape. Since characteristic modes form a set of orthogonal functions, they can be used to expand the total current on the surface of the body. However, this paper shows that what makes characteristic modes really attractive for antenna design is the physical insight they bring into the radiating phenomena taking place in the antenna. The resonance frequency of modes, as well as their radiating behavior, can be determined from the information provided by the eigenvalues associated with the characteristic modes. Moreover, by studying the current distribution of modes, an optimum feeding arrangement can be found in order to obtain the desired radiating behavior.",
"title": ""
},
{
"docid": "b59d49106614382cf97f276529d1ddd1",
"text": "core microarchitecture B. Sinharoy J. A. Van Norstrand R. J. Eickemeyer H. Q. Le J. Leenstra D. Q. Nguyen B. Konigsburg K. Ward M. D. Brown J. E. Moreira D. Levitan S. Tung D. Hrusecky J. W. Bishop M. Gschwind M. Boersma M. Kroener M. Kaltenbach T. Karkhanis K. M. Fernsler The POWER8i processor is the latest RISC (Reduced Instruction Set Computer) microprocessor from IBM. It is fabricated using the company’s 22-nm Silicon on Insulator (SOI) technology with 15 layers of metal, and it has been designed to significantly improve both single-thread performance and single-core throughput over its predecessor, the POWER7A processor. The rate of increase in processor frequency enabled by new silicon technology advancements has decreased dramatically in recent generations, as compared to the historic trend. This has caused many processor designs in the industry to show very little improvement in either single-thread or single-core performance, and, instead, larger numbers of cores are primarily pursued in each generation. Going against this industry trend, the POWER8 processor relies on a much improved core and nest microarchitecture to achieve approximately one-and-a-half times the single-thread performance and twice the single-core throughput of the POWER7 processor in several commercial applications. Combined with a 50% increase in the number of cores (from 8 in the POWER7 processor to 12 in the POWER8 processor), the result is a processor that leads the industry in performance for enterprise workloads. This paper describes the core microarchitecture innovations made in the POWER8 processor that resulted in these significant performance benefits.",
"title": ""
},
{
"docid": "ae6d36ccbf79ae6f62af3a62ef3e3bb2",
"text": "This paper presents a new neural network system called the Evolving Tree. This network resembles the Self-Organizing map, but deviates from it in several aspects, which are desirable in many analysis tasks. First of all the Evolving Tree grows automatically, so the user does not have to decide the network’s size before training. Secondly the network has a hierarchical structure, which makes network training and use computationally very efficient. Test results with both synthetic and actual data show that the Evolving Tree works quite well.",
"title": ""
},
{
"docid": "fd543534d6a9cf10abb2f073cec41fdb",
"text": "Article history: Available online 26 October 2012 We present an O ( √ n log n)-approximation algorithm for the problem of finding the sparsest spanner of a given directed graph G on n vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph G = (V , E) with nonnegative edge lengths d : E → R 0 and a stretch k 1, a subgraph H = (V , E H ) is a k-spanner of G if for every edge (s, t) ∈ E , the graph H contains a path from s to t of length at most k · d(s, t). The previous best approximation ratio was Õ (n2/3), due to Dinitz and Krauthgamer (STOC ’11). We also improve the approximation ratio for the important special case of directed 3-spanners with unit edge lengths from Õ ( √ n ) to O (n1/3 log n). The best previously known algorithms for this problem are due to Berman, Raskhodnikova and Ruan (FSTTCS ’10) and Dinitz and Krauthgamer. The approximation ratio of our algorithm almost matches Dinitz and Krauthgamer’s lower bound for the integrality gap of a natural linear programming relaxation. Our algorithm directly implies an O (n1/3 log n)-approximation for the 3-spanner problem on undirected graphs with unit lengths. An easy O ( √ n )-approximation algorithm for this problem has been the best known for decades. Finally, we consider the Directed Steiner Forest problem: given a directed graph with edge costs and a collection of ordered vertex pairs, find a minimum-cost subgraph that contains a path between every prescribed pair. We obtain an approximation ratio of O (n2/3+ ) for any constant > 0, which improves the O (n · min(n4/5,m2/3)) ratio due to Feldman, Kortsarz and Nutov (JCSS’12). © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "337c0573c1d60c141e925e182fdd1cb8",
"text": "Autonomous systems, often realized as multi-agent systems, are envisioned to deal with uncertain and dynamic environments. They are applied in dangerous situations, e.g. as rescue robots or to relieve humans from complex and tedious tasks like driving a car or infrastructure maintenance. But in order to further improve the technology a generic measurement and benchmarking of autonomy is required. Within this paper we present an improved understanding of autonomous systems. Based on this foundation we introduce our concept of a multi-dimensional autonomy metric framework that especially takes into account multisystem environments. Finally, our approach is illustrated by means of an example.",
"title": ""
},
{
"docid": "587f1510411636090bc192b1b9219b58",
"text": "Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition-the ability to come up with creative ideas, problem solutions and products-is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying on valance and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed.",
"title": ""
},
{
"docid": "b4a8541c2870ea3d91819c0c0de68ad3",
"text": "The paper will describe various types of security issues which include confidentality, integrity and availability of data. There exists various threats to security issues traffic analysis, snooping, spoofing, denial of service attack etc. The asymmetric key encryption techniques may provide a higher level of security but compared to the symmetric key encryption Although we have existing techniques symmetric and assymetric key cryptography methods but there exists security concerns. A brief description of proposed framework is defined which uses the random combination of public and private keys. The mechanisms includes: Integrity, Availability, Authentication, Nonrepudiation, Confidentiality and Access control which is achieved by private-private key model as the user is restricted both at sender and reciever end which is restricted in other models. A review of all these systems is described in this paper.",
"title": ""
},
{
"docid": "e74d1eb4f1d5c45989aff2cb0e79a83e",
"text": "Environmental audio tagging is a newly proposed task to predict the presence or absence of a specific audio event in a chunk. Deep neural network (DNN) based methods have been successfully adopted for predicting the audio tags in the domestic audio scene. In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms or even raw waveforms for audio tagging. Gated recurrent unit (GRU) based recurrent neural networks (RNNs) are then cascaded to model the long-term temporal structure of the audio signal. To complement the input information, an auxiliary CNN is designed to learn on the spatial features of stereo recordings. We evaluate our proposed methods on Task 4 (audio tagging) of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. Compared with our recent DNN-based method, the proposed structure can reduce the equal error rate (EER) from 0.13 to 0.11 on the development set. The spatial features can further reduce the EER to 0.10. The performance of the end-to-end learning on raw waveforms is also comparable. Finally, on the evaluation set, we get the state-of-the-art performance with 0.12 EER while the performance of the best existing system is 0.15 EER.",
"title": ""
},
{
"docid": "046c9eaa6fc9a6516982477e1a02f6d0",
"text": "Imperfections in healthcare revenue cycle management systems cause discrepancies between submitted claims and received payments. This paper presents a method for deriving attributional rules that can be used to support the preparation and screening of claims prior to their submission to payers. The method starts with unsupervised analysis of past payments to determine normal levels of payments for services. Then, supervised machine learning is used to derive sets of attributional rules for predicting potential discrepancies in claims. New claims can be then classified using the created models. The method was tested on a subset of Obstetrics claims for payment submitted by one hospital to Medicaid. One year of data was used to create models, which were tested using the following year's data. Results indicate that rule-based models are able to detect abnormal claims prior to their submission.",
"title": ""
},
{
"docid": "03c03dcdc15028417e699649291a2317",
"text": "The unique characteristics of origami to realize 3-D shape from 2-D patterns have been fascinating many researchers and engineers. This paper presents a fabrication of origami patterned fabric wheels that can deform and change the radius of the wheels. PVC segments are enclosed in the fabrics to build a tough and foldable structure. A special cable driven mechanism was designed to allow the wheels to deform while rotating. A mobile robot with two origami wheels has been built and tested to show that it can deform its wheels to overcome various obstacles.",
"title": ""
},
{
"docid": "c767a9b6808b4556c6f55dd406f8eb0d",
"text": "BACKGROUND\nInterest in mindfulness has increased exponentially, particularly in the fields of psychology and medicine. The trait or state of mindfulness is significantly related to several indicators of psychological health, and mindfulness-based therapies are effective at preventing and treating many chronic diseases. Interest in mobile applications for health promotion and disease self-management is also growing. Despite the explosion of interest, research on both the design and potential uses of mindfulness-based mobile applications (MBMAs) is scarce.\n\n\nOBJECTIVE\nOur main objective was to study the features and functionalities of current MBMAs and compare them to current evidence-based literature in the health and clinical setting.\n\n\nMETHODS\nWe searched online vendor markets, scientific journal databases, and grey literature related to MBMAs. We included mobile applications that featured a mindfulness-based component related to training or daily practice of mindfulness techniques. We excluded opinion-based articles from the literature.\n\n\nRESULTS\nThe literature search resulted in 11 eligible matches, two of which completely met our selection criteria-a pilot study designed to evaluate the feasibility of a MBMA to train the practice of \"walking meditation,\" and an exploratory study of an application consisting of mood reporting scales and mindfulness-based mobile therapies. The online market search eventually analyzed 50 available MBMAs. Of these, 8% (4/50) did not work, thus we only gathered information about language, downloads, or prices. The most common operating system was Android. Of the analyzed apps, 30% (15/50) have both a free and paid version. MBMAs were devoted to daily meditation practice (27/46, 59%), mindfulness training (6/46, 13%), assessments or tests (5/46, 11%), attention focus (4/46, 9%), and mixed objectives (4/46, 9%). We found 108 different resources, of which the most used were reminders, alarms, or bells (21/108, 19.4%), statistics tools (17/108, 15.7%), audio tracks (15/108, 13.9%), and educational texts (11/108, 10.2%). Daily, weekly, monthly statistics, or reports were provided by 37% (17/46) of the apps. 28% (13/46) of them permitted access to a social network. No information about sensors was available. The analyzed applications seemed not to use any external sensor. English was the only language of 78% (39/50) of the apps, and only 8% (4/50) provided information in Spanish. 20% (9/46) of the apps have interfaces that are difficult to use. No specific apps exist for professionals or, at least, for both profiles (users and professionals). We did not find any evaluations of health outcomes resulting from the use of MBMAs.\n\n\nCONCLUSIONS\nWhile a wide selection of MBMAs seem to be available to interested people, this study still shows an almost complete lack of evidence supporting the usefulness of those applications. We found no randomized clinical trials evaluating the impact of these applications on mindfulness training or health indicators, and the potential for mobile mindfulness applications remains largely unexplored.",
"title": ""
},
{
"docid": "af82ea560b98535f3726be82a2d23536",
"text": "Influence Maximization is an extensively-studied problem that targets at selecting a set of initial seed nodes in the Online Social Networks (OSNs) to spread the influence as widely as possible. However, it remains an open challenge to design fast and accurate algorithms to find solutions in large-scale OSNs. Prior Monte-Carlo-simulation-based methods are slow and not scalable, while other heuristic algorithms do not have any theoretical guarantee and they have been shown to produce poor solutions for quite some cases. In this paper, we propose hop-based algorithms that can easily scale to millions of nodes and billions of edges. Unlike previous heuristics, our proposed hop-based approaches can provide certain theoretical guarantees. Experimental evaluations with real OSN datasets demonstrate the efficiency and effectiveness of our algorithms.",
"title": ""
},
{
"docid": "6259b792713367345374d437f37abdb0",
"text": "SWOT analysis (Strength, Weakness, Opportunity, and Threat) has been in use since the 1960s as a tool to assist strategic planning in various types of enterprises including those in the construction industry. Whilst still widely used, the approach has called for improvements to make it more helpful in strategic management. The project described in this paper aimed to study whether the process to convert a SWOT analysis into a strategic plan could be assisted with some simple rationally quantitative model, as an augmented SWOT analysis. By utilizing the mathematical approaches including the quantifying techniques, the “Maximum Subarray” method, and fuzzy mathematics, one or more Heuristic Rules are derived from a SWOT analysis. These Heuristic Rules bring into focus the most influential factors concerning a strategic planning situation, and thus inform strategic analysts where particular consideration should be given. A case study conducted in collaboration with a Chinese international construction company showed that the new SWOT approach is more helpful to strategic planners. The paper provides an augmented SWOT analysis approach for strategists to conduct strategic planning in the construction industry. It also contributes fresh insights into strategic planning by introducing rationally analytic processes to improve the SWOT analysis.",
"title": ""
},
{
"docid": "415f6ca35f6ea8a9f2db938c61cf74f6",
"text": "Camptothecin (CPT) belongs to a group of monoterpenoidindole alkaloids (TIAs) and its derivatives such as irinothecan and topothecan have been widely used worldwide for the treatment of cancer, giving rise to rapidly increasing market demands. Genes from Catharanthus roseus encoding strictosidine synthase (STR) and geraniol 10-hydroxylase (G10H), were separately and simultaneously introduced into Ophiorrhiza pumila hairy roots. Overexpression of individual G10H (G lines) significantly improved CPT production with respect to non-transgenic hairy root cultures (NC line) and single STR overexpressing lines (S lines), indicating that G10H plays a more important role in stimulating CPT accumulation than STR in O. pumila. Furthermore, co-overexpression of G10H and STR genes (SG Lines) caused a 56% increase on the yields of CPT compared to NC line and single gene transgenic lines, showed that simultaneous introduction of G10H and STR can produce a synergistic effect on CPT biosynthesis in O. pumila. The MTT assay results indicated that CPT extracted from different lines showed similar anti-tumor activity, suggesting that transgenic O. pumila hairy root lines could be an alternative approach to obtain CPT. To our knowledge, this is the first report on the enhancement of CPT production in O. pumila employing a metabolic engineering strategy.",
"title": ""
}
] |
scidocsrr
|
59ca938ae8ea1e6625d4603f8b0bb594
|
Green Manufacturing: An Evaluation of Environmentally Sustainable Manufacturing Practices and Their Impact on Competitive Outcomes
|
[
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "07d8ec2d95e09e6efb822430d7001b62",
"text": "This paper presents a multilevel inverter that has been conceptualized to reduce component count, particularly for a large number of output levels. It comprises floating input dc sources alternately connected in opposite polarities with one another through power switches. Each input dc level appears in the stepped load voltage either individually or in additive combinations with other input levels. This approach results in reduced number of power switches as compared to classical topologies. The working principle of the proposed topology is demonstrated with the help of a single-phase five-level inverter. The topology is investigated through simulations and validated experimentally on a laboratory prototype. An exhaustive comparison of the proposed topology is made against the classical cascaded H-bridge topology.",
"title": ""
},
{
"docid": "e9c63b9432ec5ce8a5809db4579ef691",
"text": "Do digital games and play mean the same things for different people? This article presents the results of a three-year study in which we sought for new ways to approach digital games cultures and playing practices. First, we present the research process in brief and emphasise the importance of merging different kinds of methods and materials in the study of games cultures. Second, we introduce a gaming mentality heuristics that is not dedicated to a certain domain or genre of games, addressing light casual and light social gaming motivations as well as more dedicated ones in a joint framework. Our analysis reveals that, in contrast to common belief, the majority of digital gaming takes place between ‘casual relaxing’ and ‘committed entertaining’, where the multiplicity of experiences, feelings, and understandings that people have about their playing and digital games is wide-ranging. Digital gaming is thus found to be a multi-faceted social and cultural phenomenon which can be understood, practiced and used in various ways.",
"title": ""
},
{
"docid": "c58df0eeece5147cce15bcf49f76ba94",
"text": "Recent research has shown that a glial cell of astrocyte underpins a self-repair mechanism in the human brain, where spiking neurons provide direct and indirect feedbacks to presynaptic terminals. These feedbacks modulate the synaptic transmission probability of release (PR). When synaptic faults occur, the neuron becomes silent or near silent due to the low PR of synapses; whereby the PRs of remaining healthy synapses are then increased by the indirect feedback from the astrocyte cell. In this paper, a novel hardware architecture of Self-rePAiring spiking Neural NEtwoRk (SPANNER) is proposed, which mimics this self-repairing capability in the human brain. This paper demonstrates that the hardware can self-detect and self-repair synaptic faults without the conventional components for the fault detection and fault repairing. Experimental results show that SPANNER can maintain the system performance with fault densities of up to 40%, and more importantly SPANNER has only a 20% performance degradation when the self-repairing architecture is significantly damaged at a fault density of 80%.",
"title": ""
},
{
"docid": "92ec1f93124ddfa1faa1d7a3ab371935",
"text": "We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. For enabling this, we establish new formulations of EA variation operators, crossover and mutation, that we adapt to work on semantic networks. The algorithm employs commonsense reasoning to ensure all operations preserve the meaningfulness of the networks, using ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.",
"title": ""
},
{
"docid": "503ee34a874dd5367cde8b284e7fc63c",
"text": "Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recording of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.",
"title": ""
},
{
"docid": "5268fd63c99f43d1a155c0078b2e5df5",
"text": "With Docker gaining widespread popularity in the recent years, the container scheduler becomes a crucial role for the exploding containerized applications and services. In this work, the container host energy conservation, the container image pulling costs from the image registry to the container hosts and the workload network transition costs from the clients to the container hosts are evaluated in combination. By modeling the scheduling problem as an integer linear programming, an effective and adaptive scheduler is proposed. Impressive cost savings were achieved compared to Docker Swarm scheduler. Moreover, it can be easily integrated into the open-source container orchestration frameworks.",
"title": ""
},
{
"docid": "4aad063cbb9ce56b799aad0feb2275e8",
"text": "Pervasive computing is by its nature open and extensible, and must integrate the information from a diverse range of sources. This leads to a problem of information exchange, so sub-systems must agree on shared representations. Ontologies potentially provide a well-founded mechanism for the representation and exchange of such structured information. A number of ontologies have been developed specifically for use in pervasive computing, none of which appears to cover adequately the space of concerns applicable to application designers. We compare and contrast the most popular ontologies, evaluating them against the system challenges generally recognized within the pervasive computing community. We identify a number of deficiencies that must be addressed in order to apply the ontological techniques successfully to next-generation pervasive systems.",
"title": ""
},
{
"docid": "09c19ae7eea50f269ee767ac6e67827b",
"text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.",
"title": ""
},
{
"docid": "5dda2e6bf32fbe2a9e4b78eeeec2ab6d",
"text": "We present a tool, called DB2OWL, to automatically generate ontologies from database schemas. The mapping process starts by detecting particular cases for conceptual elements in the database and accordingly converts database components to the corresponding ontology components. We have implemented a prototype of DB2OWL tool to create OWL ontology from relational database.",
"title": ""
},
{
"docid": "3a3b898ae050456c7bf2b5997f7c12ca",
"text": "The Budeanu definitions of reactive and distortion power in circuits with nonsinusoidal waveforms have been widely used for almost 60 years. There have been objections, concerned mainly with the questions of whether these powers should be defined in the frequency domain and whether they can be measured as defined. The main drawbacks of these definitions include the fact that the Budeanu reactive and distortion powers do not possess any attributes which might be related to the power phenomena in the circuit; that their values do not provide any information which would allow the design of compensating circuits; and that the distortion power value does not provide any information about waveform distortion. It is concluded that Budeanu's concept has led the power theory of circuits with nonsinusoidal waveforms into a blind alley.",
"title": ""
},
{
"docid": "952735cb937248c837e0b0244cd9dbb1",
"text": "Recently, the desired very high throughput of 5G wireless networks drives millimeter-wave (mm-wave) communication into practical applications. A phased array technique is required to increase the effective antenna aperture at mm-wave frequency. Integrated solutions of beamforming/beam steering are extremely attractive for practical implementations. After a discussion on the basic principles of radio beam steering, we review and explore the recent advanced integration techniques of silicon-based electronic integrated circuits (EICs), photonic integrated circuits (PICs), and antenna-on-chip (AoC). For EIC, the latest advanced designs of on-chip true time delay (TTD) are explored. Even with such advances, the fundamental loss of a silicon-based EIC still exists, which can be solved by advanced PIC solutions with ultra-broad bandwidth and low loss. Advanced PIC designs for mm-wave beam steering are then reviewed with emphasis on an optical TTD. Different from the mature silicon-based EIC, the photonic integration technology for PIC is still under development. In this paper, we review and explore the potential photonic integration platforms and discuss how a monolithic integration based on photonic membranes fits the photonic mm-wave beam steering application, especially for the ease of EIC and PIC integration on a single chip. To combine EIC, for its accurate and mature fabrication techniques, with PIC, for its ultra-broad bandwidth and low loss, a hierarchical mm-wave beam steering chip with large-array delays realized in PIC and sub-array delays realized in EIC can be a future-proof solution. Moreover, the antenna units can be further integrated on such a chip using AoC techniques. Among the mentioned techniques, the integration trends on device and system levels are discussed extensively.",
"title": ""
},
{
"docid": "1430c03448096953c6798a0b6151f0b2",
"text": "This case study analyzes the impact of theory-based factors on the implementation of different blockchain technologies in use cases from the energy sector. We construct an integrated research model based on the Diffusion of Innovations theory, institutional economics and the Technology-Organization-Environment framework. Using qualitative data from in-depth interviews, we link constructs to theory and assess their impact on each use case. Doing so we can depict the dynamic relations between different blockchain technologies and the energy sector. The study provides insights for decision makers in electric utilities, and government administrations.",
"title": ""
},
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
},
{
"docid": "c30d53cd8c350615f20d5baef55de6d0",
"text": "The Internet of Things (IoT) is everywhere around us. Smart communicating objects offer the digitalization of lives. Thus, IoT opens new opportunities in criminal investigations such as a protagonist or a witness to the event. Any investigation process involves four phases: firstly the identification of an incident and its evidence, secondly device collection and preservation, thirdly data examination and extraction and then finally data analysis and formalization.\n In recent years, the scientific community sought to develop a common digital framework and methodology adapted to IoT-based infrastructure. However, the difficulty of IoT lies in the heterogeneous nature of the device, lack of standards and the complex architecture. Although digital forensics are considered and adopted in IoT investigations, this work only focuses on collection. Indeed the identification phase is relatively unexplored. It addresses challenges of finding the best evidence and locating hidden devices. So, the traditional method of digital forensics does not fully fit the IoT environment.\n In this paperwork, we investigate the mobility in the context of IoT at the crime scene. This paper discusses the data identification and the classification methodology from IoT to looking for the best evidences. We propose tools and techniques to identify and locate IoT devices. We develop the recent concept of \"digital footprint\" in the crime area based on frequencies and interactions mapping between devices. We propose technical and data criteria to efficiently select IoT devices. Finally, the paper introduces a generalist classification table as well as the limits of such an approach.",
"title": ""
},
{
"docid": "d15a2f27112c6bd8bfa2f9c01471c512",
"text": "Assuming a special version of the Montgomery-Odlyzko law on the pair correlation of zeros of the Riemann zeta function conjectured by Rudnick and Sarnak and assuming the Riemann Hypothesis, we prove new results on the prime number theorem, difference of consecutive primes, and the twin prime conjecture. 1. Introduction. Assuming the Riemann Hypothesis (RH), let us denote by 1=2 ig a nontrivial zero of a primitive L-function L
s;p attached to an irreducible cuspidal automorphic representation of GLm; m ^ 1, over Q. When m 1, this L-function is the Riemann zeta function z
s or the Dirichlet L-function L
s; c for a primitive character c. Rudnick and Sarnak [13] examined the n-level correlation for these zeros and made a far reaching conjecture which is called the Montgomery [9]-Odlyzko [11], [12] Law by Katz and Sarnak [6]. Rudnick and Sarnak also proved a case of their conjecture when a test function f has its Fourier transform b f supported in a restricted region. In this article, we will show that a version of the above conjecture for the pair correlation of zeros of the zeta function z
s implies interesting arithmetical results on prime distribution (Theorems 2, 3, and 4). These results can give us deep insight on possible ultimate bounds of these prime distribution problems. One can also see that the pair (and nlevel) correlation of zeros of zeta and L-functions is a powerful method in number theory. Our computation shows that the test function f and the support of its Fourier transform b f play a crucial role in the conjecture. To see the conjecture in Rudnick and Sarnak [13] in the case of the zeta function z
s and n 2, the pair correlation, we use a test function f
x; y which satisfies the following three conditions: (i) f
x; y f
y; x for any x; y 2 R, (ii) f
x t; y t f
x; y for any t 2 R, and (iii) f
x; y tends to 0 rapidly as j
x; yj ! 1 on the hyperplane x y 0. Arch. Math. 76 (2001) 41±50 0003-889X/01/010041-10 $ 3.50/0 Birkhäuser Verlag, Basel, 2001 Archiv der Mathematik Mathematics Subject Classification (1991): 11M26, 11N05, 11N75. 1) Supported in part by China NNSF Grant # 19701019. 2) Supported in part by USA NSF Grant # DMS 97-01225. Define the function W2
x; y 1ÿ sin p
xÿ y
p
xÿ y : Denote the Dirac function by d
x which satisfies R d
xdx 1 and defines a distribution f 7! f
0. We then define the pair correlation sum of zeros gj of the zeta function: R2
T; f ; h P g1;g2 distinct h g1 T ; g2 T f Lg1 2p ; Lg2 2p ; where T ^ 2, L log T, and h
x; y is a localized cutoff function which tends to zero rapidly when j
x; yj tends to infinity. The conjecture proposed by Rudnick and Sarnak [13] is that R2
T; f ; h 1 2p TL
",
"title": ""
},
{
"docid": "181a3d68fd5b5afc3527393fc3b276f9",
"text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.",
"title": ""
},
{
"docid": "054cde7ac85562e1f96e69f0d769de29",
"text": "Research on the impact of nocturnal road traffic noise on sleep and the consequences on daytime functioning demonstrates detrimental effects that cannot be ignored. The physiological reactions due to continuing noise processing during night time lead to primary sleep disturbances, which in turn impair daytime functioning. This review focuses on noise processing in general and in relation to sleep, as well as methodological aspects in the study of noise and sleep. More specifically, the choice of a research setting and noise assessment procedure is discussed and the concept of sleep quality is elaborated. In assessing sleep disturbances, we differentiate between objectively measured and subjectively reported complaints, which demonstrates the need for further understanding of the impact of noise on several sleep variables. Hereby, mediating factors such as noise sensitivity appear to play an important role. Research on long term effects of noise intrusion on sleep up till now has mainly focused on cardiovascular outcomes. The domain might benefit from additional longitudinal studies on deleterious effects of noise on mental health and general well-being.",
"title": ""
},
{
"docid": "41df9902a1b88da0943ae8641541acc0",
"text": "The computational and robotic synthesis of language evolution is emerging as a new exciting field of research. The objective is to come up with precise operational models of how communities of agents, equipped with a cognitive apparatus, a sensori-motor system, and a body, can arrive at shared grounded communication systems. Such systems may have similar characteristics to animal communication or human language. Apart from its technological interest in building novel applications in the domain of human-robot or robot-robot interaction, this research is of interest to the many disciplines concerned with the origins and evolution of language and communication.",
"title": ""
},
{
"docid": "55bd77c5cf2660c0690d50f639769b9c",
"text": "Multi-ported RAMs are essential for high-performance parallel computation systems. VLIW and vector processors, CGRAs, DSPs, CMPs and other processing systems often rely upon multi-ported memories for parallel access, hence higher performance. Although memories with a large number of read and write ports are important, their high implementation cost means they are used sparingly in designs. As a result, FPGA vendors only provide dual-ported block RAMs to handle the majority of usage patterns. In this paper, a novel and modular approach is proposed to construct multi-ported memories out of basic dual-ported RAM blocks. Like other multi-ported RAM designs, each write port uses a different RAM bank and each read port uses bank replication. The main contribution of this work is an optimization that merges the previous live-value-table (LVT) and XOR approaches into a common design that uses a generalized, simpler structure we call an invalidation-based live-value-table (I-LVT). Like a regular LVT, the I-LVT determines the correct bank to read from, but it differs in how updates to the table are made; the LVT approach requires multiple write ports, often leading to an area-intensive register-based implementation, while the XOR approach uses wider memories to accommodate the XOR-ed data and suffers from lower clock speeds. Two specific I-LVT implementations are proposed and evaluated, binary and one-hot coding. The I-LVT approach is especially suitable for larger multi-ported RAMs because the table is implemented only in SRAM cells. The I-LVT method gives higher performance while occupying less block RAMs than earlier approaches: for several configurations, the suggested method reduces the block RAM usage by over 44% and improves clock speed by over 76%. To assist others, we are releasing our fully parameterized Verilog implementation as an open source hardware library. The library has been extensively tested using ModelSim and Altera's Quartus tools.",
"title": ""
}
] |
scidocsrr
|
ad57643ecac12a7516a07a7210750e0f
|
Person Re-identification by Attributes
|
[
{
"docid": "dbe5661d99798b24856c61b93ddb2392",
"text": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matchin methods.",
"title": ""
},
{
"docid": "e5d523d8a1f584421dab2eeb269cd303",
"text": "In this paper, we propose a novel appearance-based method for person re-identification, that condenses a set of frames of the same individual into a highly informative signature, called Histogram Plus Epitome, HPE. It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content, via histograms representation, and on the presence of recurrent local patches, via epitome estimation. The matching of HPEs provides optimal performances against low resolution, occlusions, pose and illumination variations, defining novel state-of-the-art results on all the datasets considered.",
"title": ""
},
{
"docid": "c80222e5a7dfe420d16e10b45f8fab66",
"text": "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.",
"title": ""
},
{
"docid": "fbc47f2d625755bda6d9aa37805b69f1",
"text": "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.",
"title": ""
}
] |
[
{
"docid": "c4f0e371ea3950e601f76f8d34b736e3",
"text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.",
"title": ""
},
{
"docid": "f35dc45e28f2483d5ac66271590b365d",
"text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.",
"title": ""
},
{
"docid": "ad88d2e2213624270328be0aa019b5cd",
"text": "The traditional decision-making framework for newsvendor models is to assume a distribution of the underlying demand. However, the resulting optimal policy is typically sensitive to the choice of the distribution. A more conservative approach is to assume that the distribution belongs to a set parameterized by a few known moments. An ambiguity-averse newsvendor would choose to maximize the worst-case profit. Most models of this type assume that only the mean and the variance are known, but do not attempt to include asymmetry properties of the distribution. Other recent models address asymmetry by including skewness and kurtosis. However, closed-form expressions on the optimal bounds are difficult to find for such models. In this paper, we propose a framework under which the expectation of a piecewise linear objective function is optimized over a set of distributions with known asymmetry properties. This asymmetry is represented by the first two moments of multiple random variables that result from partitioning the original distribution. In the simplest case, this reduces to semivariance. The optimal bounds can be solved through a second-order cone programming (SOCP) problem. This framework can be applied to the risk-averse and risk-neutral newsvendor problems and option pricing. We provide a closed-form expression for the worst-case newsvendor profit with only mean, variance and semivariance information.",
"title": ""
},
{
"docid": "9076428e840f37860a395b46445c22c8",
"text": "Embedded First-In First-Out (FIFO) memories are increasingly used in many IC designs. We have created a new full-custom embedded ripple-through FIFO module with asynchronous read and write clocks. The implementation is based on a micropipeline architecture and is at least a factor two smaller than SRAM-based and standard-cell-based counterparts. This paper gives an overview of the most important design features of the new FIFO module and describes its test and design-for-test approach.",
"title": ""
},
{
"docid": "4fd78d1f9737ad996a2e3b4495e911c6",
"text": "The accuracy of Wrst impressions was examined by investigating judged construct (negative aVect, positive aVect, the Big Wve personality variables, intelligence), exposure time (5, 20, 45, 60, and 300 s), and slice location (beginning, middle, end). Three hundred and thirty four judges rated 30 targets. Accuracy was deWned as the correlation between a judge’s ratings and the target’s criterion scores on the same construct. Negative aVect, extraversion, conscientiousness, and intelligence were judged moderately well after 5-s exposures; however, positive aVect, neuroticism, openness, and agreeableness required more exposure time to achieve similar levels of accuracy. Overall, accuracy increased with exposure time, judgments based on later segments of the 5-min interactions were more accurate, and 60 s yielded the optimal ratio between accuracy and slice length. Results suggest that accuracy of Wrst impressions depends on the type of judgment made, amount of exposure, and temporal location of the slice of judged social behavior. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "cc96a29f9c2ad0bcbeb52b2dc4c96996",
"text": "The authors model the neural mechanisms underlying spatial cognition, integrating neuronal systems and behavioral data, and address the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric and visual and ideothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory is modeled as egocentric parietal representations driven by perception, retrieval, and imagery and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, which are mediated by posterior parietal and retrosplenial areas and the use of head direction representations in Papez's circuit. Thus, the hippocampus effectively indexes information by real or imagined location, whereas Papez's circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows spatial updating of representations, whereas prefrontal simulated motor efference allows mental exploration. The alternating temporal-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and ideothetic inputs.",
"title": ""
},
{
"docid": "8531a633a78f65161c3793c89a3eb093",
"text": "Mindfulness-based approaches are increasingly employed as interventions for treating a variety of psychological, psychiatric and physical problems. Such approaches include ancient Buddhist mindfulness meditations such as Vipassana and Zen meditations, modern group-based standardized meditations, such as mindfulness-based stress reduction and mindfulness-based cognitive therapy, and further psychological interventions, such as dialectical behavioral therapy and acceptance and commitment therapy. We review commonalities and differences of these interventions regarding philosophical background, main techniques, aims, outcomes, neurobiology and psychological mechanisms. In sum, the currently applied mindfulness-based interventions show large differences in the way mindfulness is conceptualized and practiced. The decision to consider such practices as unitary or as distinct phenomena will probably influence the direction of future research.",
"title": ""
},
{
"docid": "ec788f48207b0a001810e1eabf6b2312",
"text": "Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution.",
"title": ""
},
{
"docid": "4c588e5f05c3e4c2f3b974306095af02",
"text": "Software Development Life Cycle (SDLC) is a model which provides us the basic information about the methods/techniques to develop software. It is concerned with the software management processes that examine the area of software development through the development models, which are known as software development life cycle. There are many development models namely Waterfall model, Iterative model, V-shaped model, Spiral model, Extreme programming, Iterative and Incremental Method, Rapid prototyping model and Big Bang Model. This is paper is concerned with the study of these different software development models and to compare their advantages and disadvantages.",
"title": ""
},
{
"docid": "2f7d40aa2b6f2986ab4a48eec757036a",
"text": "Patients ask for procedures with long-lasting effects. ArteFill is the first permanent injectable approved in 2006 by the FDA for nasolabial folds. It consists of cleaned microspheres of polymethylmethacrylate (PMMA) suspended in bovine collagen. Over the development period of 20 years most of its side effects have been eliminated to achieve the same safety standard as today’s hyaluronic acid products. A 5-year follow-up study in U.S. clinical trial patients has shown the same wrinkle improvement as seen at 6 months. Long-term follow-up in European Artecoll patients has shown successful wrinkle correction lasting up to 15 years. A wide variety of off-label indications and applications have been developed that help the physician meet the individual needs of his/her patients. Serious complications after ArteFill injections, such as granuloma formation, have not been reported due to the reduction of PMMA microspheres smaller than 20 μm to less than 1% “by the number.” Minor technique-related side effects, however, may occur during the initial learning curve. Patient and physician satisfaction with ArteFill has been shown to be greater than 90%.",
"title": ""
},
{
"docid": "8db37f6f495a68da176e1ed411ce37a7",
"text": "We present Bolt, a data management system for an emerging class of applications—those that manipulate data from connected devices in the home. It abstracts this data as a stream of time-tag-value records, with arbitrary, application-defined tags. For reliable sharing among applications, some of which may be running outside the home, Bolt uses untrusted cloud storage as seamless extension of local storage. It organizes data into chunks that contains multiple records and are individually compressed and encrypted. While chunking enables efficient transfer and storage, it also implies that data is retrieved at the granularity of chunks, instead of records. We show that the resulting overhead, however, is small because applications in this domain frequently query for multiple proximate records. We develop three diverse applications on top of Bolt and find that the performance needs of each are easily met. We also find that compared to OpenTSDB, a popular time-series database system, Bolt is up to 40 times faster than OpenTSDB while requiring 3–5 times less storage space.",
"title": ""
},
{
"docid": "11538da6cfda3a81a7ddec0891aae1d9",
"text": "This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.",
"title": ""
},
{
"docid": "a9975365f0bad734b77b67f63bdf7356",
"text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.",
"title": ""
},
{
"docid": "dd40063dd10027f827a65976261c8683",
"text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.",
"title": ""
},
{
"docid": "d51ef75ccf464cc03656210ec500db44",
"text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.",
"title": ""
},
{
"docid": "5e8e39cb778e86b24d6ceee6419dd333",
"text": "The nature of healthcare processes in a multidisciplinary hospital is inherently complex. In this paper, we identify particular problems of modeling healthcare processes with the de-facto standard process modeling language BPMN. We discuss all possibilities of BPMN adressing these problems. Where plain BPMN fails to produce nice and easily comprehensible results, we propose a new approach: Encorporating role information in process models using the color attribute of tasks complementary to the usage of lanes.",
"title": ""
},
{
"docid": "de22f2f15cc427b50d4018e8c44df7e4",
"text": "In this paper we examine challenges identified with participatory design research in the developing world and develop the postcolonial notion of cultural hybridity as a sensitizing concept. While participatory design intentionally addresses power relationships, its methodology does not to the same degree cover cultural power relationships, which extend beyond structural power and voice. The notion of cultural hybridity challenges the static cultural binary opposition between the self and the other, Western and non-Western, or the designer and the user---offering a more nuanced approach to understanding the malleable nature of culture. Drawing from our analysis of published literature in the participatory design community, we explore the complex relationship of participatory design to international development projects and introduce postcolonial cultural hybridity via postcolonial theory and its application within technology design thus far. Then, we examine how participatory approaches and cultural hybridity may interact in practice and conclude with a set of sensitizing insights and topics for further discussion in the participatory design community.",
"title": ""
},
{
"docid": "2e6081fc296fbe22c97d1997a77093f6",
"text": "Despite the security community's best effort, the number of serious vulnerabilities discovered in software is increasing rapidly. In theory, security audits should find and remove the vulnerabilities before the code ever gets deployed. However, due to the enormous amount of code being produced, as well as a the lack of manpower and expertise, not all code is sufficiently audited. Thus, many vulnerabilities slip into production systems. A best-practice approach is to use a code metric analysis tool, such as Flawfinder, to flag potentially dangerous code so that it can receive special attention. However, because these tools have a very high false-positive rate, the manual effort needed to find vulnerabilities remains overwhelming. In this paper, we present a new method of finding potentially dangerous code in code repositories with a significantly lower false-positive rate than comparable systems. We combine code-metric analysis with metadata gathered from code repositories to help code review teams prioritize their work. The paper makes three contributions. First, we conducted the first large-scale mapping of CVEs to GitHub commits in order to create a vulnerable commit database. Second, based on this database, we trained a SVM classifier to flag suspicious commits. Compared to Flawfinder, our approach reduces the amount of false alarms by over 99 % at the same level of recall. Finally, we present a thorough quantitative and qualitative analysis of our approach and discuss lessons learned from the results. We will share the database as a benchmark for future research and will also provide our analysis tool as a web service.",
"title": ""
},
{
"docid": "03422659c355a0e9385957768ee1629e",
"text": "Recent research has resulted in the creation of many fact extraction systems. To be able to utilize the extracted facts to their full potential, it is essential to understand their semantics. Placing these extracted facts in an ontology is an effective way to provide structure, which facilitates better understanding of semantics. Today there are many systems that extract facts and organize them in an ontology, namely DBpedia, NELL, YAGO etc. While such ontologies are used in a variety of applications, including IBM’s Jeopardy-winning Watson system, they demand significant effort in their creation. They are either manually curated, or built using semi-supervised machine learning techniques. As the effort in the creation of an ontology is significant, it is often hard to organize facts extracted from a corpus of documents that is different from the one used to build these ontologies in the first place. The goal of this work is to be able to automatically construct ontologies, for a given set of entities, properties and relations. One key source of this data is the Wikipedia tables dataset. Wikipedia tables are unique in that they are a large (1.4 million) and heterogeneous set of tables that can be extracted at very high levels of precision. Rather than augmenting an existing ontology, which is a very challenging research problem in itself, I propose to automatically construct a new ontology by utilizing representations of entities, their attributes and relations. These representations will be learnt using unsupervised machine learning techniques on facts extracted from Wikipedia tables. Thus, the end system will not only extract facts from Wikipedia tables, but also automatically organize them in an Ontology to understand the semantics of Wikipedia tables better.",
"title": ""
},
{
"docid": "57ff69385a6b8202b02bf4c03d7dd78b",
"text": "bstract In this paper, we present a survey on the application of recurrent neural networks to the task of statistical language modeling. lthough it has been shown that these models obtain good performance on this task, often superior to other state-of-the-art techniques, hey suffer from some important drawbacks, including a very long training time and limitations on the number of context words hat can be taken into account in practice. Recent extensions to recurrent neural network models have been developed in an attempt o address these drawbacks. This paper gives an overview of the most important extensions. Each technique is described and its erformance on statistical language modeling, as described in the existing literature, is discussed. Our structured overview makes t possible to detect the most promising techniques in the field of recurrent neural networks, applied to language modeling, but it lso highlights the techniques for which further research is required. 2014 Published by Elsevier Ltd.",
"title": ""
}
] |
scidocsrr
|
b75fd4fbea4ae254fc5e5defc441433f
|
A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires
|
[
{
"docid": "1632b81068788aeeb4e458e340bbcec9",
"text": "We present the design and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter. The landing algorithm is integrated with algorithms for visual acquisition of the target (a helipad), and navigation to the target, from an arbitrary initial position and orientation. We use vision for precise target detection and recognition, and a combination of vision and GPS for navigation. The helicopter updates its landing target parameters based on vision and uses an onboard behavior-based controller to follow a path to the landing site. We present significant results from flight trials in the field which demonstrate that our detection, recognition and control algorithms are accurate, robust and repeatable.",
"title": ""
}
] |
[
{
"docid": "16fa1af9571b623aa756d49fb269ecee",
"text": "The subgraph isomorphism problem is one of the most important problems for pattern recognition in graphs. Its applications are found in many di®erent disciplines, including chemistry, medicine, and social network analysis. Because of the NP-completeness of the problem, the existing exact algorithms exhibit an exponential worst-case running time. In this paper, we propose several improvements to the well-known Ullmann's algorithm for the problem. The improvements lower the time consumption as well as the space requirements of the algorithm. We experimentally demonstrate the e±ciency of our improvement by comparing it to another set of improvements called FocusSearch, as well as other state-of-the-art algorithms, namely VF2 and LAD.",
"title": ""
},
{
"docid": "87eed2ab66bd9bda90cf2a838b990207",
"text": "We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees. We show that these structures have the potential to capture the full sentential contexts of a lexeme and provide a uniform basis for the composition of distributional knowledge in a way that captures both mutual disambiguation and generalization.",
"title": ""
},
{
"docid": "75d5fa282c31e2955b3089d75c0dff4f",
"text": "Over the last two decades, we have seen remarkable progress in computer vision with demonstration of capabilities such as face detection, handwritten digit recognition, reconstructing three-dimensional models of cities, automated monitoring of activities, segmenting out organs or tissues in biological images, and sensing for control of robots and cars. Yet there are many problems where computers still perform significantly below human perception. For example, in the recent PAS. CAL benchmark challenge on visual object detection, the average precision for most 3D object categories was under 50%.",
"title": ""
},
{
"docid": "ef8ba8ae9696333f5da066813a4b79d7",
"text": "Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatiotemporal heatmaps for both predicted captions, and for arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves comparable captioning performance with existing methods while providing more accurate saliency heatmaps. Our code is available at visionlearninggroup.github.io/caption-guided-saliency/.",
"title": ""
},
{
"docid": "32e43b54fcfce9404ff306d72031f0d8",
"text": "A stereo vision based road free space detection method was proposed in this paper. In the method, we firstly use semi global stereo matching to get disparity map, which then is projected to V disparity. Secondly, we use a modified Hough Transformation method to detect slope line to model road surface in V disparity map, then detect the vanishing line via road surface model and eliminate road and sky area in disparity map. Thirdly, the processed disparity map was projected to get U disparity, and in which we select intersection points of road and obstacles. Finally, dynamic programming was utilized to optimize best intersection line, which then were projected back to get free space boundary. The experimental results on KITTI datasets show that, the method can correctly detect free space boundary, and get remarkable results.",
"title": ""
},
{
"docid": "57a48d8c45b7ed6bbcde11586140f8b6",
"text": "We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.",
"title": ""
},
{
"docid": "52e89cfbed8497bad3971e3338f05e39",
"text": "BACKGROUND\nThe navicular drop test is a measure to evaluate the function of the medial longitudinal arch, which is important for examination of patients with overuse injuries. Conflicting results have been found with regard to differences in navicular drop between healthy and injured participants. Normal values have not yet been established as foot length, age, gender, and Body Mass Index (BMI) may influence the navicular drop. The purpose of the study was to investigate the influence of foot length, age, gender, and BMI on the navicular drop during walking.\n\n\nMETHODS\nNavicular drop was measured with a novel technique (Video Sequence Analysis, VSA) using 2D video. Flat reflective markers were placed on the medial side of the calcaneus, the navicular tuberosity, and the head of the first metatarsal bone. The navicular drop was calculated as the perpendicular distance between the marker on the navicular tuberosity and the line between the markers on calcaneus and first metatarsal head. The distance between the floor and the line in standing position between the markers on calcaneus and first metatarsal were added afterwards.\n\n\nRESULTS\n280 randomly selected participants without any foot problems were analysed during treadmill walking (144 men, 136 women). Foot length had a significant influence on the navicular drop in both men (p < 0.001) and women (p = 0.015), whereas no significant effect was found of age (p = 0.27) or BMI (p = 0.88). Per 10 mm increase in foot length, the navicular drop increased by 0.40 mm for males and 0.31 mm for females. Linear models were created to calculate the navicular drop relative to foot length.\n\n\nCONCLUSION\nThe study demonstrated that the dynamic navicular drop is influenced by foot length and gender. Lack of adjustment for these factors may explain, at least to some extent, the disagreement between previous studies on navicular drop. Future studies should account for differences in these parameters.",
"title": ""
},
{
"docid": "6f3931bf36c98642ee89284c6d6d7b7e",
"text": "Despite rapidly increasing numbers of diverse online shoppers the relationship of website design to trust, satisfaction, and loyalty has not previously been modeled across cultures. In the current investigation three components of website design (Information Design, Navigation Design, and Visual Design) are considered for their impact on trust and satisfaction. In turn, relationships of trust and satisfaction to online loyalty are evaluated. Utilizing data collected from 571 participants in Canada, Germany, and China various relationships in the research model are tested using PLS analysis for each country separately. In addition the overall model is tested for all countries combined as a control and verification of earlier research findings, although this time with a mixed country sample. All paths in the overall model are confirmed. Differences are determined for separate country samples concerning whether Navigation Design, Visual Design, and Information Design result in trust, satisfaction, and ultimately loyalty suggesting design characteristics should be a central consideration in website design across cultures.",
"title": ""
},
{
"docid": "914e5896f60967ed1be97e00049d9238",
"text": "Numerous software architecture proposals are available to industrial information engineers in developing their enterprise information systems. While those proposals and corresponding methodologies are helpful to engineers in determining appropriate architecture, the systematic methods for the evaluation of software architecture are scarce. To select appropriate software architecture from various alternatives appropriately, a scenario-based method has been proposed to assess how software architecture affects the fulfillment of business requirements. The empirical evaluation on the selection of a supply chain software tool has shown that the developed method offers remarkable insights of software development and can be incorporated into the industrial informatics practice of an organization with a moderate cost.",
"title": ""
},
{
"docid": "8ec018e0fc4ca7220387854bdd034a58",
"text": "Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.",
"title": ""
},
{
"docid": "e3caf8dcb01139ae780616c022e1810d",
"text": "The relative age effect (RAE) and its relationships with maturation, anthropometry, and physical performance characteristics were examined across a representative sample of English youth soccer development programmes. Birth dates of 1,212 players, chronologically age-grouped (i.e., U9's-U18's), representing 17 professional clubs (i.e., playing in Leagues 1 & 2) were obtained and categorised into relative age quartiles from the start of the selection year (Q1 = Sep-Nov; Q2 = Dec-Feb; Q3 = Mar-May; Q4 = Jun-Aug). Players were measured for somatic maturation and performed a battery of physical tests to determine aerobic fitness (Multi-Stage Fitness Test [MSFT]), Maximal Vertical Jump (MVJ), sprint (10 & 20m), and agility (T-Test) performance capabilities. Odds ratio's (OR) revealed Q1 players were 5.3 times (95% confidence intervals [CI]: 4.08-6.83) more likely to be selected than Q4's, with a particularly strong RAE bias observed in U9 (OR: 5.56) and U13-U16 squads (OR: 5.45-6.13). Multivariate statistical models identified few between quartile differences in anthropometric and fitness characteristics, and confirmed chronological age-group and estimated age at peak height velocity (APHV) as covariates. Assessment of practical significance using magnitude-based inferences demonstrated body size advantages in relatively older players (Q1 vs. Q4) that were very-likely small (Effect Size [ES]: 0.53-0.57), and likely to very-likely moderate (ES: 0.62-0.72) in U12 and U14 squads, respectively. Relatively older U12-U14 players also demonstrated small advantages in 10m (ES: 0.31-0.45) and 20m sprint performance (ES: 0.36-0.46). The data identify a strong RAE bias at the entry-point to English soccer developmental programmes. RAE was also stronger circa-PHV, and relatively older players demonstrated anaerobic performance advantages during the pubescent period. Talent selectors should consider motor function and maturation status assessments to avoid premature and unwarranted drop-out of soccer players within youth development programmes.",
"title": ""
},
{
"docid": "5828308d458a1527f651d638375f3732",
"text": "We conducted a mixed methods study of the use of the Meerkat and Periscope apps for live streaming video and audio broadcasts from a mobile device. We crowdsourced a task to describe the content, setting, and other characteristics of 767 live streams. We also interviewed 20 frequent streamers to explore their motivations and experiences. Together, the data provide a snapshot of early live streaming use practices. We found a diverse range of activities broadcast, which interviewees said were used to build their personal brand. They described live streaming as providing an authentic, unedited view into their lives. They liked how the interaction with viewers shaped the content of their stream. We found some evidence for multiple live streams from the same event, which represent an opportunity for multiple perspectives on events of shared public interest.",
"title": ""
},
{
"docid": "829e437aee100b302f35900e0b0a91ab",
"text": "A 1. 5 V 0.18mum CMOS LNA for GPS applications has been designed with fully differential topology. Under such a low supply voltage, the fully differential LNA has been simulated, it provides a series of good results in Noise figure, Linearity and Power consumption. The LNA achieves a Noise figure of 1. 5 dB, voltage gain of 32 dB, Power dissipation of 6 mW, and the input reflection coefficient (Sn) is -23 dB.",
"title": ""
},
{
"docid": "b3db73c0398e6c0e6a90eac45bb5821f",
"text": "The task of video grounding, which temporally localizes a natural language description in a video, plays an important role in understanding videos. Existing studies have adopted strategies of sliding window over the entire video or exhaustively ranking all possible clip-sentence pairs in a presegmented video, which inevitably suffer from exhaustively enumerated candidates. To alleviate this problem, we formulate this task as a problem of sequential decision making by learning an agent which regulates the temporal grounding boundaries progressively based on its policy. Specifically, we propose a reinforcement learning based framework improved by multi-task learning and it shows steady performance gains by considering additional supervised boundary information during training. Our proposed framework achieves state-ofthe-art performance on ActivityNet’18 DenseCaption dataset (Krishna et al. 2017) and Charades-STA dataset (Sigurdsson et al. 2016; Gao et al. 2017) while observing only 10 or less clips per video.",
"title": ""
},
{
"docid": "bc23c7f85cf30e51b44eced45fecdedd",
"text": "Article history: Received 27 August 2007 Received in revised form 25 December 2008 Accepted 7 January 2009",
"title": ""
},
{
"docid": "618ba5659da9110ae02299dff4be227f",
"text": "Over the past decade , many organizations have begun to routinely capture huge volumes of historical data describing their operations, products, and customers. At the same time, scientists and engineers in many fields have been capturing increasingly complex experimental data sets, such as gigabytes of functional magnetic resonance imaging (MRI) data describing brain activity in humans. The field of data mining addresses the question of how best to use this historical data to discover general regularities and improve the process of making decisions. Machine Learning and Data Mining",
"title": ""
},
{
"docid": "734638df47b05b425b0dcaaab11d886e",
"text": "Satisfying the needs of users of online video streaming services requires not only to manage the network Quality of Service (QoS), but also to address the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation conform a solid contribution to QoE management of video streaming services, opening new venues for further research.",
"title": ""
},
{
"docid": "93177b2546e8efa1eccad4c81468f9fe",
"text": "Online Transaction Processing (OLTP) databases include a suite of features - disk-resident B-trees and heap files, locking-based concurrency control, support for multi-threading - that were optimized for computer technology of the late 1970's. Advances in modern processors, memories, and networks mean that today's computers are vastly different from those of 30 years ago, such that many OLTP databases will now fit in main memory, and most OLTP transactions can be processed in milliseconds or less. Yet database architecture has changed little.\n Based on this observation, we look at some interesting variants of conventional database systems that one might build that exploit recent hardware trends, and speculate on their performance through a detailed instruction-level breakdown of the major components involved in a transaction processing database system (Shore) running a subset of TPC-C. Rather than simply profiling Shore, we progressively modified it so that after every feature removal or optimization, we had a (faster) working system that fully ran our workload. Overall, we identify overheads and optimizations that explain a total difference of about a factor of 20x in raw performance. We also show that there is no single \"high pole in the tent\" in modern (memory resident) database systems, but that substantial time is spent in logging, latching, locking, B-tree, and buffer management operations.",
"title": ""
},
{
"docid": "f8ddedb1bdc57d75fb5ea9bf81ec51f5",
"text": "Given a text description, most existing semantic parsers synthesize a program in one shot. However, it is quite challenging to produce a correct program solely based on the description, which in reality is often ambiguous or incomplete. In this paper, we investigate interactive semantic parsing, where the agent can ask the user clarification questions to resolve ambiguities via a multi-turn dialogue, on an important type of programs called “If-Then recipes.” We develop a hierarchical reinforcement learning (HRL) based agent that significantly improves the parsing performance with minimal questions to the user. Results under both simulation and human evaluation show that our agent substantially outperforms non-interactive semantic parsers and rule-based agents.",
"title": ""
}
] |
scidocsrr
|
d789a62b9802b2e874e6ad5b7b354857
|
Toward Quantifying Vertex Similarity in Networks
|
[
{
"docid": "43dad2821c9b8663bc26d86b71362ef5",
"text": "Measures of graph similarity have a broad array of applications, including comparing chemical structures, navigating complex networks like the World Wide Web, and more recently, analyzing different kinds of biological data. This thesis surveys several different notions of similarity, then focuses on an interesting class of iterative algorithms that use the structural similarity of local neighborhoods to derive pairwise similarity scores between graph elements. We have developed a new similarity measure that uses a linear update to generate both node and edge similarity scores and has desirable convergence properties. This thesis also explores the application of our similarity measure to graph matching. We attempt to correctly position a subgraph GB within a graph GA using a maximum weight matching algorithm applied to the similarity scores between GA and GB. Significant performance improvements are observed when the topological information provided by the similarity measure is combined with additional information about the attributes of the graph elements and their local neighborhoods. Matching results are presented for subgraph matching within randomly-generated graphs; an appendix briefly discusses matching applications in the yeast interactome, a graph representing protein-protein interactions within yeast. Thesis Supervisor: George Verghese Title: Professor of Electrical Engineering and Computer Science",
"title": ""
}
] |
[
{
"docid": "6f99852d599ee533da2a7c58f9b90c42",
"text": "Searching for Web service access points is no longer attached to service registries as Web search engines have become a new major source for discovering Web services. In this work, we conduct a thorough analytical investigation on the plurality of Web service interfaces that exist on the Web today. Using our Web Service Crawler Engine (WSCE), we collect metadata service information on retrieved interfaces through accessible UBRs, service portals and search engines. We use this data to determine Web service statistics and distribution based on object sizes, types of technologies employed, and the number of functioning services. This statistical data can be used to help determine the current status of Web services. We determine an intriguing result that 63% of the available Web services on the Web are considered to be active. We further use our findings to provide insights on improving the service retrieval process.",
"title": ""
},
{
"docid": "db550980a6988bcd9a96486619d6478c",
"text": "Atmospheric turbulence induced fading is one of the main impairments affecting free-space optics (FSO) communications. In recent years, Gamma-Gamma fading has become the dominant fading model for FSO links because of its excellent agreement with measurement data for a wide range of turbulence conditions. However, in contrast to RF communications, the analysis techniques for FSO are not well developed and prior work has mostly resorted to simulations and numerical integration for performance evaluation in Gamma-Gamma fading. In this paper, we express the pairwise error probabilities of single-input single- output (SISO) and multiple-input multiple-output (MIMO) FSO systems with intensity modulation and direct detection (IM/DD) as generalized infinite power series with respect to the signal- to-noise ratio. For numerical evaluation these power series are truncated to a finite number of terms and an upper bound for the associated approximation error is provided. The resulting finite power series enables fast and accurate numerical evaluation of the bit error rate of IM/DD FSO with on-off keying and pulse position modulation in SISO and MIMO Gamma-Gamma fading channels. Furthermore, we extend the well-known RF concepts of diversity and combining gain to FSO and Gamma-Gamma fading. In particular, we provide simple closed-form expressions for the diversity gain and the combining gain of MIMO FSO with repetition coding across lasers at the transmitter and equal gain combining or maximal ratio combining at the receiver.",
"title": ""
},
{
"docid": "8d2572c07ec2bb7716facd27e36cc83e",
"text": "AIM\nThis study examined the levels of occupational stress and burnout among surgeons in Fiji.\n\n\nMETHODS\nA document set comprising a cover letter; a consent form; a sociodemographic and supplementary information questionnaire; the Maslach Burnout Inventory (MBI); the 12-item General Health Questionnaire (GHQ-12); the Alcohol Use Disorders Identification Test (AUDIT); and the Professional Quality of Life (ProQOL) questionnaires were provided to surgeons from three public divisional hospitals in Fiji. Thirty-six of 43 (83.7%) invited surgeons participated in the study.\n\n\nRESULTS\nAccording to their MBI scores, surgeons suffered from low (10, 27.8%), moderate (23, 63.9%), and high (3, 8.3%) levels of burnout. Comparatively, 23 (63.9%) demonstrated moderate burnout according to their ProQOL scores. Substantial psychiatric morbidity was observed in 16 (44.0%) surgeons per their GHQ-12 scores. Consumption of alcohol was noted in 29 (80.6%) surgeons, and 12 (33.4%) had AUDIT scores characterizing their alcohol use in excess of low-risk guidelines or as harmful or hazardous drinking. Surgeons of Fijian nationality showed higher MBI emotional exhaustion and depersonalization scores compared with surgeons of other nationalities. Surgeons with an awareness of the availability of counseling services at their hospitals showed low AUDIT and ProQOL burnout scores. Smokers, alcohol drinkers, and kava drinkers showed higher AUDIT scores.\n\n\nCONCLUSION\nThis study highlights a level of occupational stress and burnout among surgeons in Fiji and a lack of awareness of their mental and physical well-being. The authors recommend that occupational stress and burnout intervention strategies be put in place in hospitals in Fiji.",
"title": ""
},
{
"docid": "16d7767e9f2216ce0789b8a92d8d65e4",
"text": "In the rst genetic programming (GP) book John Koza noticed that tness histograms give a highly informative global view of the evolutionary process (Koza, 1992). The idea is further developed in this paper by discussing GP evolution in analogy to a physical system. I focus on three interrelated major goals: (1) Study the the problem of search eeort allocation in GP; (2) Develop methods in the GA/GP framework that allow adap-tive control of diversity; (3) Study ways of adaptation for faster convergence to optimal solution. An entropy measure based on phenotype classes is introduced which abstracts tness histograms. In this context, entropy represents a measure of population diversity. An analysis of entropy plots and their correlation with other statistics from the population enables an intelligent adaptation of search control.",
"title": ""
},
{
"docid": "7d5300adb91df986d4fe94195422e35f",
"text": "This paper proposes a simple CNN model for creating general-purpose sentence embeddings that can transfer easily across domains and can also act as effective initialization for downstream tasks. Recently, averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings. However, these models represent a sentence, only in terms of features of words or uni-grams in it. In contrast, our model (CSE) utilizes both features of words and n-grams to encode sentences, which is actually a generalization of these bag-of-words models. The extensive experiments demonstrate that CSE performs better than average models in transfer learning setting and exceeds the state of the art in supervised learning setting by initializing the parameters with the pre-trained sentence embeddings.",
"title": ""
},
{
"docid": "c83d4f1136b07797912a4c4722b685a1",
"text": "In agriculture research of automatic leaf disease detection is essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect symptoms of disease as soon as they appear on plant leaves. The term disease is usually used only for destruction of live plants. This paper provides various methods used to study of leaf disease detection using image processing. The methods studies are for increasing throughput and reduction subjectiveness arising from human experts in detecting the leaf disease[1].digital image processing is a technique used for enhancement of the image. To improve agricultural products automatic detection of symptoms is beneficial. Keyword— Leaf disease, Image processing.",
"title": ""
},
{
"docid": "abb54a0c155805e7be2602265f78ae79",
"text": "In this paper we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action-perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures including a working memory and a longterm memory stage. Spatial longterm memory is modeled along the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and longterm memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "15004021346a3c79924733bfc38bbe82",
"text": "Self-improving systems are a promising new approach to developing artificial intelligence. But will their behavior be predictable? Can we be sure that they will behave as we intended even after many generations of selfimprovement? This paper presents a framework for answering questions like these. It shows that self-improvement causes systems to converge on an",
"title": ""
},
{
"docid": "4451f35b38f0b3af0ff006d8995b0265",
"text": "Social media together with still growing social media communities has become a powerful and promising solution in crisis and emergency management. Previous crisis events have proved that social media and mobile technologies used by citizens (widely) and public services (to some extent) have contributed to the post-crisis relief efforts. The iSAR+ EU FP7 project aims at providing solutions empowering citizens and PPDR (Public Protection and Disaster Relief) organizations in online and mobile communications for the purpose of crisis management especially in search and rescue operations. This paper presents the results of survey aiming at identification of preliminary end-user requirements in the close interworking with end-users across Europe.",
"title": ""
},
{
"docid": "938066effe63f546c5bb451987cad4ad",
"text": "The area under the ROC curve (AUC) is a natural performance measure when the goal is to find a discriminative decision function. We present a rigorous derivation of an AUC maximizing Support Vector Machine; its optimization criterion is composed of a convex bound on the AUC and a margin term. The number of constraints in the optimization problem grows quadratically in the number of examples. We discuss an approximation for large data sets that clusters the constraints. Our experiments show that the AUC maximizing Support Vector Machine does in fact lead to higher AUC values.",
"title": ""
},
{
"docid": "e3da39127d234b40f28ac957d58d2098",
"text": "Evaluating the clinical similarities between pairwisepatients is a fundamental problem in healthcare informatics. Aproper patient similarity measure enables various downstreamapplications, such as cohort study and treatment comparative effectiveness research. One major carrier for conductingpatient similarity research is the Electronic Health Records(EHRs), which are usually heterogeneous, longitudinal, andsparse. Though existing studies on learning patient similarityfrom EHRs have shown being useful in solving real clinicalproblems, their applicability is limited due to the lack of medicalinterpretations. Moreover, most previous methods assume avector based representation for patients, which typically requiresaggregation of medical events over a certain time period. As aconsequence, the temporal information will be lost. In this paper, we propose a patient similarity evaluation framework based ontemporal matching of longitudinal patient EHRs. Two efficientmethods are presented, unsupervised and supervised, both ofwhich preserve the temporal properties in EHRs. The supervisedscheme takes a convolutional neural network architecture, andlearns an optimal representation of patient clinical recordswith medical concept embedding. The empirical results on real-world clinical data demonstrate substantial improvement overthe baselines.",
"title": ""
},
{
"docid": "7b4140cb95fbaae6e272326ab59fb884",
"text": "Network intrusion detection systems (NIDSs) play a crucial role in defending computer networks. However, there are concerns regarding the feasibility and sustainability of current approaches when faced with the demands of modern networks. More specifically, these concerns relate to the increasing levels of required human interaction and the decreasing levels of detection accuracy. This paper presents a novel deep learning technique for intrusion detection, which addresses these concerns. We detail our proposed nonsymmetric deep autoencoder (NDAE) for unsupervised feature learning. Furthermore, we also propose our novel deep learning classification model constructed using stacked NDAEs. Our proposed classifier has been implemented in graphics processing unit (GPU)-enabled TensorFlow and evaluated using the benchmark KDD Cup ’99 and NSL-KDD datasets. Promising results have been obtained from our model thus far, demonstrating improvements over existing approaches and the strong potential for use in modern NIDSs.",
"title": ""
},
{
"docid": "95ca78f61a46f6e34edce6210d5e0939",
"text": "Wireless sensor networks (WSNs) have recently gained a lot of attention by scientific community. Small and inexpensive devices with low energy consumption and limited computing resources are increasingly being adopted in different application scenarios including environmental monitoring, target tracking and biomedical health monitoring. In many such applications, node localization is inherently one of the system parameters. Localization process is necessary to report the origin of events, routing and to answer questions on the network coverage ,assist group querying of sensors. In general, localization schemes are classified into two broad categories: range-based and range-free. However, it is difficult to classify hybrid solutions as range-based or range-free. In this paper we make this classification easy, where range-based schemes and range-free schemes are divided into two types: fully schemes and hybrid schemes. Moreover, we compare the most relevant localization algorithms and discuss the future research directions for wireless sensor networks localization schemes.",
"title": ""
},
{
"docid": "12932c683fe7d378341baacb09a290d0",
"text": "News coverage of video game violence studies has been critiqued for focusing mainly on studies supporting negative effects and failing to report studies that did not find evidence for such effects. These concerns were tested in a sample of 68 published studies using child and adolescent samples. Contrary to our hypotheses, study effect size was not a predictor of either newspaper coverage or publication in journals with a high-impact factor. However, a relationship between poorer study quality and newspaper coverage approached significance. High-impact journals were not found to publish studies with higher quality. Poorer quality studies, which tended to highlight negative findings, also received more citations in scholarly sources. Our findings suggest that negative effects of violent video games exposure in children and adolescents, rather than large effect size or high methodological quality, increase the likelihood of a study being cited in other academic publications and subsequently receiving news media coverage.",
"title": ""
},
{
"docid": "274186e87674920bfe98044aa0208320",
"text": "Message routing in mobile delay tolerant networks inherently relies on the cooperation between nodes. In most existing routing protocols, the participation of nodes in the routing process is taken as granted. However, in reality, nodes can be unwilling to participate. We first show in this paper the impact of the unwillingness of nodes to participate in existing routing protocols through a set of experiments. Results show that in the presence of even a small proportion of nodes that do not forward messages, performance is heavily degraded. We then analyze two major reasons of the unwillingness of nodes to participate, i.e., their rational behavior (also called selfishness) and their wariness of disclosing private mobility information. Our main contribution in this paper is to survey the existing related research works that overcome these two issues. We provide a classification of the existing approaches for protocols that deal with selfish behavior. We then conduct experiments to compare the performance of these strategies for preventing different types of selfish behavior. For protocols that preserve the privacy of users, we classify the existing approaches and provide an analytical comparison of their security guarantees.",
"title": ""
},
{
"docid": "2757d2ab9c3fbc2eb01385771f297a71",
"text": "In this brief, we propose a variable structure based nonlinear missile guidance/autopilot system with highly maneuverable actuators, mainly consisting of thrust vector control and divert control system, for the task of intercepting of a theater ballistic missile. The aim of the present work is to achieve bounded target interception under the mentioned 5 degree-of-freedom (DOF) control such that the distance between the missile and the target will enter the range of triggering the missile's explosion. First, a 3-DOF sliding-mode guidance law of the missile considering external disturbances and zero-effort-miss (ZEM) is designed to minimize the distance between the center of the missile and that of the target. Next, a quaternion-based sliding-mode attitude controller is developed to track the attitude command while coping with variation of missile's inertia and uncertain aerodynamic force/wind gusts. The stability of the overall system and ZEM-phase convergence are analyzed thoroughly via Lyapunov stability theory. Extensive simulation results are obtained to validate the effectiveness of the proposed integrated guidance/autopilot system by use of the 5-DOF inputs.",
"title": ""
},
{
"docid": "ec772eccaa45eb860582820e751f3415",
"text": "Navigational assistance aims to help visually-impaired people to ambulate the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for the terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture, aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments prove the qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectivity and versatility of the assistive framework.",
"title": ""
},
{
"docid": "aec638e44bf1aade4506e0e17bbe3757",
"text": "Latency determines not only how players experience online gameplay but also how to design the games to mitigate its effects and meet player expectations.",
"title": ""
},
{
"docid": "c2db241a94d9fec15af613d593730dea",
"text": "This study investigated the influence of Cloisite-15A nanoclay on the physical, performance, and mechanical properties of bitumen binder. Cloisite-15A was blended in the bitumen in variegated percentages from 1% to 9% with increment of 2%. The blended bitumen was characterized using penetration, softening point, and dynamic viscosity using rotational viscometer, and compared with unmodified bitumen equally penetration grade 60/70. The rheological parameters were investigated using Dynamic Shear Rheometer (DSR), and mechanical properties were investigated by using Marshall Stability test. The results indicated an increase in softening point, dynamic viscosity and decrease in binder penetration. Rheological properties of bitumen increase complex modulus, decrease phase angle and improve rutting resistances as well. There was significant improvement in Marshall Stability, rather marginal improvement in flow value. The best improvement in the modified binder was obtained with 5% Cloisite-15A nanoclay. Keywords—Cloisite-15A, complex shear modulus, phase angle, rutting resistance.",
"title": ""
}
] |
scidocsrr
|