query_id (string, 32 chars) | query (string, 5–4.91k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (string, 7 classes) |
---|---|---|---|---|
4161a47d40b6ff09d0bff26cd2e55295
|
Detecting Changes in Twitter Streams using Temporal Clusters of Hashtags
|
[
{
"docid": "c0235dd0dc574f18c6f11e1afc7c4903",
"text": "Today streaming text mining plays an important role within real-time social media mining. Given the amount and cadence of the data generated by those platforms, classical text mining techniques are not suitable to deal with such new mining challenges. Event detection is no exception, available algorithms rely on text mining techniques applied to pre-known datasets processed with no restrictions about computational complexity and required execution time per document analysis. This work presents a lightweight event detection using wavelet signal analysis of hashtag occurrences in the twitter public stream. It also proposes a strategy to describe detected events using a Latent Dirichlet Allocation topic inference model based on Gibbs Sampling. Peak detection using Continuous Wavelet Transformation achieved good results in the identification of abrupt increases on the mentions of specific hashtags. The combination of this method with the extraction of topics from tweets with hashtag mentions proved to be a viable option to summarize detected twitter events in streaming environments.",
"title": ""
},
{
"docid": "18738a644f88af299d9e94157f804812",
"text": "Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter (tweets) have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features.",
"title": ""
}
] |
[
{
"docid": "2a4eb6d12a50034b5318d246064cb86e",
"text": "In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.",
"title": ""
},
{
"docid": "befd91b3e6874b91249d101f8373db01",
"text": "Today's biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/",
"title": ""
},
{
"docid": "ab47dbcafba637ae6e3b474642439bd3",
"text": "Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 x 10-6) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). The detection process is fully automatic and does not require any manual intervention.",
"title": ""
},
{
"docid": "8f8bd08f73ee191a1f826fa0d61ff149",
"text": "We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms. Algorithms, however, have not been designed yet to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem, which admits a tractable solution. We compute the optimal solution in near closed form, with complexity of the resulting algorithm comparable to complexity of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. To demonstrate the usefulness of the proposed algorithm, it is applied to restore images that have been blurred and corrupted with additive white gaussian noise. As a special case, we consider blur-free image denoising. In each case, its performance is compared to a locally adaptive linear MSE-optimal filter. We show that the images denoised and restored using the SSIM-optimal filter have higher SSIM index, and superior perceptual quality than those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that a) designing image processing algorithms, and, in particular, denoising and restoration-type algorithms, can yield significant gains over existing (in particular, linear MMSE-based) algorithms by optimizing them for perceptual distortion measures, and b) these gains may be obtained without significant increase in the computational complexity of the algorithm.",
"title": ""
},
{
"docid": "07015d54df716331e42613e547e74771",
"text": "A complex computing problem may be efficiently solved on a system with multiple processing elements by dividing its implementation code into several tasks or modules that execute in parallel. The modules may then be assigned to and scheduled on the processing elements so that the total execution time is minimum. Finding an optimal schedule for parallel programs is a non-trivial task and is considered to be NP-complete. For heterogeneous systems having processors with different characteristics, most of the scheduling algorithms use greedy approach to assign processors to the modules. This paper suggests a novel approach called constrained earliest finish time (CEFT) to provide better schedules for heterogeneous systems using the concept of the constrained critical paths (CCPs). In contrast to other approaches used for heterogeneous systems, the CEFT strategy takes into account a broader view of the input task graph. Furthermore, the statically generated CCPs may be efficiently scheduled in comparison with other approaches. The experimentation results show that the CEFT scheduling strategy outperforms the well-known HEFT, DLS and LMT strategies by producing shorter schedules for a diverse collection of task graphs. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c2177b7e3cdca3800b3d465229835949",
"text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .",
"title": ""
},
{
"docid": "bd3ba8635a8cd2112a1de52c90e2a04b",
"text": "Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements compared to rule-based and statistical machine translation (SMT) techniques, by overcoming many of the weaknesses in the conventional techniques. We study and apply NMT techniques to create a system with multiple models which we then apply for six Indian language pairs. We compare the performances of our NMT models with our system using automatic evaluation metrics such as UNK Count, METEOR, F-Measure, and BLEU. We find that NMT techniques are very effective for machine translations of Indian language pairs. We then demonstrate that we can achieve good accuracy even using a shallow network; on comparing the performance of Google Translate on our test dataset, our best model outperformed Google Translate by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.",
"title": ""
},
{
"docid": "6547b8d856a742925936ae20bdbf3543",
"text": "In this work we present a visual servoing approach that enables a humanoid robot to robustly execute dual arm grasping and manipulation tasks. Therefore the target object(s) and both hands are tracked alternately and a combined open-/ closed-loop controller is used for positioning the hands with respect to the target(s). We address the perception system and how the observable workspace can be increased by using an active vision system on a humanoid head. Furthermore a control framework for reactive positioning of both hands using position based visual servoing is presented, where the sensor data streams coming from the vision system, the joint encoders and the force/torque sensors are fused and joint velocity values are generated. This framework can be used for bimanual grasping as well as for two handed manipulations which is demonstrated with the humanoid robot Armar-III that executes grasping and manipulation tasks in a kitchen environment.",
"title": ""
},
{
"docid": "e2e47bef900599b0d7b168e02acf7e88",
"text": "Reflection seismic data from the F3 block in the Dutch North Sea exhibits many largeamplitude reflections at shallow horizons, typically categorized as “brightspots ” (Schroot and Schuttenhelm, 2003), mainly because of their bright appearance. In most cases, these bright reflections show a significant “flatness” contrasting with local structural trends. While flatspots are often easily identified in thick reservoirs, we have often occasionally observed apparent flatspot tuning effects at fluid contacts near reservoir edges and in thin reservoir beds, while only poorly understanding them. We conclude that many of the shallow large-amplitude reflections in block F3 are dominated by flatspots, and we investigate the thin-bed tuning effects that such flatspots cause as they interact with the reflection from the reservoir’s upper boundary. There are two possible effects to be considered: (1) the “wedge-model” tuning effects of the flatspot and overlying brightspots, dimspots, or polarity-reversals; and (2) the stacking effects that result from possible inclusion of post-critical flatspot reflections in these shallow sands. We modeled the effects of these two phenomena for the particular stratigraphic sequence in block F3. Our results suggest that stacking of post-critical flatspot reflections can cause similar large-amplitude but flat reflections, in some cases even causing an interface expected to produce a ‘dimspot’ to appear as a ‘brightspot’. Analysis of NMO stretch and muting shows the likely exclusion of critical offset data in stacked output. If post-critical reflections are included in stacking, unusual results will be observed. In the North Sea case, we conclude the tuning effect was the primary reason causing for the brightness and flatness of these reflections. However, it is still important to note that care should be taken while applying muting on reflections with wide range of incidence angles and the inclusion of critical offset data may cause some spurious features in the stacked section.",
"title": ""
},
{
"docid": "d69d694eadb068dc019dce0eb51d5322",
"text": "In this paper the application of image prior combinations to the Bayesian Super Resolution (SR) image registration and reconstruction problem is studied. Two sparse image priors, a Total Variation (TV) prior and a prior based on the `1 norm of horizontal and vertical first order differences (f.o.d.), are combined with a non-sparse Simultaneous Auto Regressive (SAR) prior. Since, for a given observation model, each prior produces a different posterior distribution of the underlying High Resolution (HR) image, the use of variational approximation will produce as many posterior approximations as priors we want to combine. A unique approximation is obtained here by finding the distribution on the HR image given the observations that minimizes a linear convex combination of Kullback-Leibler (KL) divergences. We find this distribution in closed form. The estimated HR images are compared with the ones obtained by other SR reconstruction methods.",
"title": ""
},
{
"docid": "2aa9d6eb5c8e3fd62541a562530352a2",
"text": "In the last few years, we have seen an exponential increase in the number of Internet-enabled devices, which has resulted in popularity of fog and cloud computing among end users. End users expect high data rates coupled with secure data access for various applications executed either at the edge (fog computing) or in the core network (cloud computing). However, the bidirectional data flow between the end users and the devices located at either the edge or core may cause congestion at the cloud data centers, which are used mainly for data storage and data analytics. The high mobility of devices (e.g., vehicles) may also pose additional challenges with respect to data availability and processing at the core data centers. Hence, there is a need to have most of the resources available at the edge of the network to ensure the smooth execution of end-user applications. Considering the challenges of future user demands, we present an architecture that integrates cloud and fog computing in the 5G environment that works in collaboration with the advanced technologies such as SDN and NFV with the NSC model. The NSC service model helps to automate the virtual resources by chaining in a series for fast computing in both computing technologies. The proposed architecture also supports data analytics and management with respect to device mobility. Moreover, we also compare the core and edge computing with respect to the type of hypervisors, virtualization, security, and node heterogeneity. By focusing on nodes' heterogeneity at the edge or core in the 5G environment, we also present security challenges and possible types of attacks on the data shared between different devices in the 5G environment.",
"title": ""
},
{
"docid": "cbdb038d8217ec2e0c4174519d6f2012",
"text": "Many information retrieval algorithms rely on the notion of a good distance that allows to efficiently compare objects of different nature. Recently, a new promising metric called Word Mover’s Distance was proposed to measure the divergence between text passages. In this paper, we demonstrate that this metric can be extended to incorporate term-weighting schemes and provide more accurate and computationally efficient matching between documents using entropic regularization. We evaluate the benefits of both extensions in the task of cross-lingual document retrieval (CLDR). Our experimental results on eight CLDR problems suggest that the proposed methods achieve remarkable improvements in terms of Mean Reciprocal Rank compared to several baselines.",
"title": ""
},
{
"docid": "6886b42b7624d2a47466d7356973f26c",
"text": "Conventional on-off keyed signals, such as return-to-zero (RZ) and nonreturn-to-zero (NRZ) signals are susceptible to cross-gain modulation (XGM) in semiconductor optical amplifiers (SOAs) due to pattern effect. In this letter, XGM effect of Manchester-duobinary, RZ differential phase-shift keying (RZ-DPSK), NRZ-DPSK, RZ, and NRZ signals in SOAs were compared. The experimental results confirmed the reduction of crosstalk penalty in SOAs by using Manchester-duobinary signals",
"title": ""
},
{
"docid": "4f6979ca99ec7fb0010fd102e7796248",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "b0f13c59bb4ba0f81ebc86373ad80d81",
"text": "3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.",
"title": ""
},
{
"docid": "a65d67cdd3206a99f91774ae983064b4",
"text": "BACKGROUND\nIn recent years there has been a progressive rise in the number of asylum seekers and refugees displaced from their country of origin, with significant social, economic, humanitarian and public health implications. In this population, up-to-date information on the rate and characteristics of mental health conditions, and on interventions that can be implemented once mental disorders have been identified, are needed. This umbrella review aims at systematically reviewing existing evidence on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in adult and children asylum seekers and refugees resettled in low, middle and high income countries.\n\n\nMETHODS\nWe conducted an umbrella review of systematic reviews summarizing data on the prevalence of common mental disorders and on the efficacy of psychosocial and pharmacological interventions in asylum seekers and/or refugees. Methodological quality of the included studies was assessed with the AMSTAR checklist.\n\n\nRESULTS\nThirteen reviews reported data on the prevalence of common mental disorders while fourteen reviews reported data on the efficacy of psychological or pharmacological interventions. Although there was substantial variability in prevalence rates, we found that depression and anxiety were at least as frequent as post-traumatic stress disorder, accounting for up to 40% of asylum seekers and refugees. In terms of psychosocial interventions, cognitive behavioral interventions, in particular narrative exposure therapy, were the most studied interventions with positive outcomes against inactive but not active comparators.\n\n\nCONCLUSIONS\nCurrent epidemiological data needs to be expanded with more rigorous studies focusing not only on post-traumatic stress disorder but also on depression, anxiety and other mental health conditions. In addition, new studies are urgently needed to assess the efficacy of psychosocial interventions when compared not only with no treatment but also each other. Despite current limitations, existing epidemiological and experimental data should be used to develop specific evidence-based guidelines, possibly by international independent organizations, such as the World Health Organization or the United Nations High Commission for Refugees. Guidelines should be applicable to different organizations of mental health care, including low and middle income countries as well as high income countries.",
"title": ""
},
{
"docid": "a7747c3329f26833e01ade020b45eaeb",
"text": "The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",
"title": ""
},
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "d60ea5f80654adeb4442f6aaa0c2f164",
"text": "Repetition and semantic-associative priming effects have been demonstrated for words in nonstructured contexts (i.e., word pairs or lists of words) in numerous behavioral and electrophysiological studies. The processing of a word has thus been shown to benefit from the prior presentation of an identical or associated word in the absence of a constraining context. An examination of such priming effects for words that are embedded within a meaningful discourse context provides information about the interaction of different levels of linguistic analysis. This article reviews behavioral and electrophysiological research that has examined the processing of repeated and associated words in sentence and discourse contexts. It provides examples of the ways in which eye tracking and event-related potentials might be used to further explore priming effects in discourse. The modulation of lexical priming effects by discourse factors suggests the interaction of information at different levels in online language comprehension.",
"title": ""
},
{
"docid": "0624dd3af2c1df013783b76a6ce0c7b3",
"text": "In SAC'05, Strangio proposed protocol ECKE- 1 as an efficient elliptic curve Diffie-Hellman two-party key agreement protocol using public key authentication. In this letter, we show that protocol ECKE-1 is vulnerable to key-compromise impersonation attacks. We also present an improved protocol - ECKE-1N, which can withstand such attacks. The new protocol's performance is comparable to the well-known MQV protocol and maintains the same remarkable list of security properties.",
"title": ""
}
] |
scidocsrr
|
01080490d8845e603208753303c2cc7c
|
The transformation of product development process into lean environment using set-based concurrent engineering: A case study from an aerospace industry
|
[
{
"docid": "61768befa972c8e9f46524a59c44fabb",
"text": "This paper presents a newly defined set-based concurrent engineering process, which the authors believe addresses some of the key challenges faced by engineering enterprises in the 21 century. The main principles of Set-Based Concurrent Engineering (SBCE) have been identified via an extensive literature review. Based on these principles the SBCE baseline model was developed. The baseline model defines the stages and activities which represent the product development process to be employed in the LeanPPD (lean product and process development) project. The LeanPPD project is addressing the needs of European manufacturing companies for a new model that extends beyond lean manufacturing, and incorporates lean thinking in the product design development process.",
"title": ""
}
] |
[
{
"docid": "32d0a26f21a25fe1e783b1edcfbcf673",
"text": "Histologic grading has been used as a guide for clinical management in follicular lymphoma (FL). Proliferation index (PI) of FL generally correlates with tumor grade; however, in cases of discordance, it is not clear whether histologic grade or PI correlates with clinical aggressiveness. To objectively evaluate these cases, we determined PI by Ki-67 immunostaining in 142 cases of FL (48 grade 1, 71 grade 2, and 23 grade 3). A total of 24 cases FL with low histologic grade but high PI (LG-HPI) were identified, a frequency of 18%. On histologic examination, LG-HPI FL often exhibited blastoid features. Patients with LG-HPI FL had inferior disease-specific survival but a higher 5-year disease-free rate than low-grade FL with concordantly low PI (LG-LPI). However, transformation to diffuse large B-cell lymphoma was uncommon in LG-HPI cases (1 of 19; 5%) as compared with LG-LPI cases (27 of 74; 36%). In conclusion, LG-HPI FL appears to be a subgroup of FL with clinical behavior more akin to grade 3 FL. We propose that these LG-HPI FL cases should be classified separately from cases of low histologic grade FL with concordantly low PI.",
"title": ""
},
{
"docid": "8b1734f040031e22c50b6b2a573ff58a",
"text": "Is it permissible to harm one to save many? Classic moral dilemmas are often defined by the conflict between a putatively rational response to maximize aggregate welfare (i.e., the utilitarian judgment) and an emotional aversion to harm (i.e., the non-utilitarian judgment). Here, we address two questions. First, what specific aspect of emotional responding is relevant for these judgments? Second, is this aspect of emotional responding selectively reduced in utilitarians or enhanced in non-utilitarians? The results reveal a key relationship between moral judgment and empathic concern in particular (i.e., feelings of warmth and compassion in response to someone in distress). Utilitarian participants showed significantly reduced empathic concern on an independent empathy measure. These findings therefore reveal diminished empathic concern in utilitarian moral judges.",
"title": ""
},
{
"docid": "8c7b6d0ecb1b1a4a612f44e8de802574",
"text": "Recently, the Fisher vector representation of local features has attracted much attention because of its effectiveness in both image classification and image retrieval. Another trend in the area of image retrieval is the use of binary feature such as ORB, FREAK, and BRISK. Considering the significant performance improvement in terms of accuracy in both image classification and retrieval by the Fisher vector of continuous feature descriptors, if the Fisher vector were also to be applied to binary features, we would receive the same benefits in binary feature based image retrieval and classification. In this paper, we derive the closed-form approximation of the Fisher vector of binary features which are modeled by the Bernoulli mixture model. In experiments, it is shown that the Fisher vector representation improves the accuracy of image retrieval by 25% compared with a bag of binary words approach.",
"title": ""
},
{
"docid": "b0eb2048209c7ceeb3c67c2b24693745",
"text": "Modeling an ontology is a hard and time-consuming task. Although methodologies are useful for ontologists to create good ontologies, they do not help with the task of evaluating the quality of the ontology to be reused. For these reasons, it is imperative to evaluate the quality of the ontology after constructing it or before reusing it. Few studies usually present only a set of criteria and questions, but no guidelines to evaluate the ontology. The effort to evaluate an ontology is very high as there is a huge dependence on the evaluator’s expertise to understand the criteria and questions in depth. Moreover, the evaluation is still very subjective. This study presents a novel methodology for ontology evaluation, taking into account three fundamental principles: i) it is based on the Goal, Question, Metric approach for empirical evaluation; ii) the goals of the methodologies are based on the roles of knowledge representations combined with specific evaluation criteria; iii) each ontology is evaluated according to the type of ontology. The methodology was empirically evaluated using different ontologists and ontologies of the same domain. The main contributions of this study are: i) defining a step-by-step approach to evaluate the quality of an ontology; ii) proposing an evaluation based on the roles of knowledge representations; iii) the explicit difference of the evaluation according to the type of the ontology iii) a questionnaire to evaluate the ontologies; iv) a statistical model that automatically calculates the quality of the ontologies.",
"title": ""
},
{
"docid": "4b6a4f9d91bc76c541f4879a1a684a3f",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "b0c4b345063e729d67396dce77e677a6",
"text": "Work done on the implementation of a fuzzy logic controller in a single intersection of two one-way streets is presented. The model of the intersection is described and validated, and the use of the theory of fuzzy sets in constructing a controller based on linguistic control instructions is introduced. The results obtained from the implementation of the fuzzy logic controller are tabulated against those corresponding to a conventional effective vehicle-actuated controller. With the performance criterion being the average delay of vehicles, it is shown that the use of a fuzzy logic controller results in a better performance.",
"title": ""
},
{
"docid": "ad6672657fc07ed922f1e2c0212b30bc",
"text": "As a generalization of the ordinary wavelet transform, the fractional wavelet transform (FRWT) is a very promising tool for signal analysis and processing. Many of its fundamental properties are already known; however, little attention has been paid to its sampling theory. In this paper, we first introduce the concept of multiresolution analysis associated with the FRWT, and then propose a sampling theorem for signals in FRWT-based multiresolution subspaces. The necessary and sufficient condition for the sampling theorem is derived. Moreover, sampling errors due to truncation and aliasing are discussed. The validity of the theoretical derivations is demonstrated via simulations.",
"title": ""
},
{
"docid": "892e70d9666267bc1faf3911c8e60264",
"text": "Interactive spoken dialogue provides many new challenges for natural language understanding systems. One of the most critical challenges is simply determining the speaker’s intended utterances: both segmenting a speaker’s turn into utterances and determining the intended words in each utterance. Even assuming perfect word recognition, the latter problem is complicated by the occurrence of speech repairs, which occur where speakers go back and change (or repeat) something they just said. The words that are replaced or repeated are no longer part of the intended utterance, and so need to be identified. Segmenting turns and resolving repairs are strongly intertwined with a third task: identifying discourse markers. Because of the interactions, and interactions with POS tagging and speech recognition, we need to address these tasks together and early on in the processing stream. This paper presents a statistical language model in which we redefine the speech recognition problem so that it includes the identification of POS tags, discourse markers, speech repairs and intonational phrases. By solving these simultaneously, we obtain better results on each task than addressing them separately. Our model is able to identify 72% of turn-internal intonational boundaries with a precision of 71%, 97% of discourse markers with 96% precision, and detect and correct 66% of repairs with 74% precision.",
"title": ""
},
{
"docid": "1adc476c1e322d7cc7a0c93e726a8e2c",
"text": "A wireless body area network is a radio-frequency- based wireless networking technology that interconnects tiny nodes with sensor or actuator capabilities in, on, or around a human body. In a civilian networking environment, WBANs provide ubiquitous networking functionalities for applications varying from healthcare to safeguarding of uniformed personnel. This article surveys pioneer WBAN research projects and enabling technologies. It explores application scenarios, sensor/actuator devices, radio systems, and interconnection of WBANs to provide perspective on the trade-offs between data rate, power consumption, and network coverage. Finally, a number of open research issues are discussed.",
"title": ""
},
{
"docid": "fe55db2d04fdba4f4655e39520f135bd",
"text": "The application of virtual reality in e-commerce has enormous potential for transforming online shopping into a real-world equivalent. However, the growing research interest focuses on virtual reality technology adoption for the development of e-commerce environments without addressing social and behavioral facets of online shopping such as trust. At the same time, trust is a critical success factor for e-commerce and remains an open issue as to how it can be accomplished within an online store. This paper shows that the use of virtual reality for online shopping environments offers an advanced customer experience compared to conventional web stores and enables the formation of customer trust. The paper presents a prototype virtual shopping mall environment, designed on principles derived by an empirically tested model for building trust in e-commerce. The environment is evaluated with an empirical study providing evidence and explaining that a virtual reality shopping environment would be preferred by customers over a conventional web store and would facilitate the assessment of the e-vendor’s trustworthiness.",
"title": ""
},
{
"docid": "3e6151d32fc5c2be720aab5cc467eecb",
"text": "The weighted linear combination (WLC) technique is a decision rule for deriving composite maps using GIS. It is one of the most often used decision models in GIS. The method, however, is frequently applied without full understanding of the assumptions underling this approach. In many case studies, the WLC model has been applied incorrectly and with dubious results because analysts (decision makers) have ignored or been unaware of the assumptions. This paper provides a critical overview of the current practice with respect to GIS/WLC and suggests the best practice approach.",
"title": ""
},
{
"docid": "5a7e97c755e29a9a3c82fc3450f9a929",
"text": "Intel Software Guard Extensions (SGX) is a hardware-based Trusted Execution Environment (TEE) that enables secure execution of a program in an isolated environment, called an enclave. SGX hardware protects the running enclave against malicious software, including the operating system, hypervisor, and even low-level firmware. This strong security property allows trustworthy execution of programs in hostile environments, such as a public cloud, without trusting anyone (e.g., a cloud provider) between the enclave and the SGX hardware. However, recent studies have demonstrated that enclave programs are vulnerable to accurate controlled-channel attacks conducted by a malicious OS. Since enclaves rely on the underlying OS, curious and potentially malicious OSs can observe a sequence of accessed addresses by intentionally triggering page faults. In this paper, we propose T-SGX, a complete mitigation solution to the controlled-channel attack in terms of compatibility, performance, and ease of use. T-SGX relies on a commodity component of the Intel processor (since Haswell), called Transactional Synchronization Extensions (TSX), which implements a restricted form of hardware transactional memory. As TSX is implemented as an extension (i.e., snooping the cache protocol), any unusual event, such as an exception or interrupt, that should be handled in its core component, results in an abort of the ongoing transaction. One interesting property is that the TSX abort suppresses the notification of errors to the underlying OS. This means that the OS cannot know whether a page fault has occurred during the transaction. T-SGX, by utilizing this property of TSX, can carefully isolate the effect of attempts to tap running enclaves, thereby completely eradicating the known controlledchannel attack. We have implemented T-SGX as a compiler-level scheme to automatically transform a normal enclave program into a secured enclave program without requiring manual source code modification or annotation. We not only evaluate the security properties of T-SGX, but also demonstrate that it could be applied to all the previously demonstrated attack targets, such as libjpeg, Hunspell, and FreeType. To evaluate the performance of T-SGX, we ported 10 benchmark programs of nbench to the SGX environment. Our evaluation results look promising. T-SGX is † The two lead authors contributed equally to this work. ⋆ The author did part of this work during an intership at Microsoft Research. an order of magnitude faster than the state-of-the-art mitigation schemes. On our benchmarks, T-SGX incurs on average 50% performance overhead and less than 30% storage overhead.",
"title": ""
},
{
"docid": "3081cab6599394a1cc062e1f2e00decf",
"text": "This paper describes the 3Book, a 3D interactive visualization of a codex book as a component for digital library and information-intensive applications. The 3Book is able to represent books of almost unlimited length, allows users to read large format books, and has features to enhance reading and sensemaking.",
"title": ""
},
{
"docid": "33c113db245fb36c3ce8304be9909be6",
"text": "Bring Your Own Device (BYOD) is growing in popularity. In fact, this inevitable and unstoppable trend poses new security risks and challenges to control and manage corporate networks and data. BYOD may be infected by viruses, spyware or malware that gain access to sensitive data. This unwanted access led to the disclosure of information, modify access policy, disruption of service, loss of productivity, financial issues, and legal implications. This paper provides a review of existing literature concerning the access control and management issues, with a focus on recent trends in the use of BYOD. This article provides an overview of existing research articles which involve access control and management issues, which constitute of the recent rise of usage of BYOD devices. This review explores a broad area concerning information security research, ranging from management to technical solution of access control in BYOD. The main aim for this is to investigate the most recent trends touching on the access control issues in BYOD concerning information security and also to analyze the essential and comprehensive requirements needed to develop an access control framework in the future. Keywords— Bring Your Own Device, BYOD, access control, policy, security.",
"title": ""
},
{
"docid": "21025b37c5c172399c63148f1bfa49ab",
"text": "Buffer overflows belong to the most common class of attacks on today’s Internet. Although stack-based variants are still by far more frequent and well-understood, heap-based overflows have recently gained more attention. Several real-world exploits have been published that corrupt heap management information and allow arbitrary code execution with the privileges of the victim process. This paper presents a technique that protects the heap management information and allows for run-time detection of heap-based overflows. We discuss the structure of these attacks and our proposed detection scheme that has been implemented as a patch to the GNU Lib C. We report the results of our experiments, which demonstrate the detection effectiveness and performance impact of our approach. In addition, we discuss different mechanisms to deploy the memory protection.",
"title": ""
},
{
"docid": "b51f3871cf5354c23e5ffd18881fe951",
"text": "As the Internet grows in importance, concerns about online privacy have arisen. We describe the development and validation of three short Internet-administered scales measuring privacy related attitudes ('Privacy Concern') and behaviors ('General Caution' and 'Technical Protection'). Internet Privacy Scales 1 In Press: Journal of the American Society for Information Science and Technology UNCORRECTED proofs. This is a preprint of an article accepted for publication in Journal of the American Society for Information Science and Technology copyright 2006 Wiley Periodicals, Inc. Running Head: INTERNET PRIVACY SCALES Development of measures of online privacy concern and protection for use on the",
"title": ""
},
{
"docid": "05f77aceabb886ea54af07e1bfeb1686",
"text": "The associations between time spent in sleep, sedentary behaviors (SB) and physical activity with health are usually studied without taking into account that time is finite during the day, so time spent in each of these behaviors are codependent. Therefore, little is known about the combined effect of time spent in sleep, SB and physical activity, that together constitute a composite whole, on obesity and cardio-metabolic health markers. Cross-sectional analysis of NHANES 2005-6 cycle on N = 1937 adults, was undertaken using a compositional analysis paradigm, which accounts for this intrinsic codependence. Time spent in SB, light intensity (LIPA) and moderate to vigorous activity (MVPA) was determined from accelerometry and combined with self-reported sleep time to obtain the 24 hour time budget composition. The distribution of time spent in sleep, SB, LIPA and MVPA is significantly associated with BMI, waist circumference, triglycerides, plasma glucose, plasma insulin (all p<0.001), and systolic (p<0.001) and diastolic blood pressure (p<0.003), but not HDL or LDL. Within the composition, the strongest positive effect is found for the proportion of time spent in MVPA. Strikingly, the effects of MVPA replacing another behavior and of MVPA being displaced by another behavior are asymmetric. For example, re-allocating 10 minutes of SB to MVPA was associated with a lower waist circumference by 0.001% but if 10 minutes of MVPA is displaced by SB this was associated with a 0.84% higher waist circumference. The proportion of time spent in LIPA and SB were detrimentally associated with obesity and cardiovascular disease markers, but the association with SB was stronger. For diabetes risk markers, replacing SB with LIPA was associated with more favorable outcomes. Time spent in MVPA is an important target for intervention and preventing transfer of time from LIPA to SB might lessen the negative effects of physical inactivity.",
"title": ""
},
{
"docid": "7471dc4c3020d479457dfbbdac924501",
"text": "Objective:Communicating with families is a core skill for neonatal clinicians, yet formal communication training rarely occurs. This study examined the impact of an intensive interprofessional communication training for neonatology fellows and nurse practitioners.Study Design:Evidence-based, interactive training for common communication challenges in neonatology incorporated didactic sessions, role-plays and reflective exercises. Participants completed surveys before, after, and one month following the training.Result:Five neonatology fellows and eight nurse practitioners participated (n=13). Before the training, participants overall felt somewhat prepared (2.6 on 5 point Likert-type scale) to engage in core communication challenges; afterwards, participants overall felt very well prepared (4.5 on Likert-type scale) (P<0.05). One month later, participants reported frequently practicing the taught skills and felt quite willing to engage in difficult conversations.Conclusion:An intensive communication training program increased neonatology clinicians’ self-perceived competence to face communication challenges which commonly occur, but for which training is rarely provided.",
"title": ""
},
{
"docid": "a2253bf241f7e5f60e889258e4c0f40c",
"text": "BACKGROUND-Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE-This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD-The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS-Seven distinct evaluation strategies were identified, wherein the most common one, “Pre-Post Comparison,” was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent), and Schedule (18 percent). Looking at measurement perspectives, “Project” represents the majority with 66 percent. CONCLUSION-The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and the inaccurate descriptions of the evaluation context. Measurements to assess the short and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.",
"title": ""
},
{
"docid": "c699ede2caeb5953decc55d8e42c2741",
"text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.",
"title": ""
}
] |
scidocsrr
|
52944b9b907da2f4956eb0c891f32727
|
Towards Verifiably Ethical Robot Behaviour
|
[
{
"docid": "854c0cc4f9beb2bf03ac58be8bf79e8c",
"text": "Mobile robots have the potential to become the ideal tool to teach a broad range of engineering disciplines. Indeed, mobile robots are getting increasingly complex and accessible. They embed elements from diverse fields such as mechanics, digital electronics, automatic control, signal processing, embedded programming, and energy management. Moreover, they are attractive for students which increases their motivation to learn. However, the requirements of an effective education tool bring new constraints to robotics. This article presents the e-puck robot design, which specifically targets engineering education at university level. Thanks to its particular design, the e-puck can be used in a large spectrum of teaching activities, not strictly related to robotics. Through a systematic evaluation by the students, we show that the epuck fits this purpose and is appreciated by 90 percent of a large sample of students.",
"title": ""
},
{
"docid": "b4b06fc0372537459de882b48152c4c9",
"text": "As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) concluding with an approach towards the maintenance of dignity in human-robot relationships.",
"title": ""
}
] |
[
{
"docid": "e807c0b74553a62a0e57caa2665aaa98",
"text": "Reverse genetics in model organisms such as Drosophila melanogaster, Arabidopsis thaliana, zebrafish and rats, efficient genome engineering in human embryonic stem and induced pluripotent stem cells, targeted integration in crop plants, and HIV resistance in immune cells — this broad range of outcomes has resulted from the application of the same core technology: targeted genome cleavage by engineered, sequence-specific zinc finger nucleases followed by gene modification during subsequent repair. Such 'genome editing' is now established in human cells and a number of model organisms, thus opening the door to a range of new experimental and therapeutic possibilities.",
"title": ""
},
{
"docid": "fad6638497886e557d8c55a98e5a00b0",
"text": "Cancer remains a major killer worldwide. Traditional methods of cancer treatment are expensive and have some deleterious side effects on normal cells. Fortunately, the discovery of anticancer peptides (ACPs) has paved a new way for cancer treatment. With the explosive growth of peptide sequences generated in the post genomic age, it is highly desired to develop computational methods for rapidly and effectively identifying ACPs, so as to speed up their application in treating cancer. Here we report a sequence-based predictor called iACP developed by the approach of optimizing the g-gap dipeptide components. It was demonstrated by rigorous cross-validations that the new predictor remarkably outperformed the existing predictors for the same purpose in both overall accuracy and stability. For the convenience of most experimental scientists, a publicly accessible web-server for iACP has been established at http://lin.uestc.edu.cn/server/iACP, by which users can easily obtain their desired results.",
"title": ""
},
{
"docid": "a8af37df01ad45139589e82bd81deb61",
"text": "As technology use continues to rise, especially among young individuals, there are concerns that excessive use of technology may impact academic performance. Researchers have started to investigate the possible negative effects of technology use on college academic performance, but results have been mixed. The following study seeks to expand upon previous studies by exploring the relationship among the use of a wide variety of technology forms and an objective measure of academic performance (GPA) using a 7-day time diary data collection method. The current study also seeks to examine both underclassmen and upperclassmen to see if these groups differ in how they use technology. Upperclassmen spent significantly more time using technology for academic and workrelated purposes, whereas underclassmen spent significantly more time using cell phones, online chatting, and social networking sites. Significant negative correlations with GPA emerged for television, online gaming, adult site, and total technology use categories. Keyword: Technology use, academic performance, post-secondary education.",
"title": ""
},
{
"docid": "f5532b33092d22c97d1b6ebe69de051f",
"text": "Automatic personality recognition is useful for many computational applications, including recommendation systems, dating websites, and adaptive dialogue systems. There have been numerous successful approaches to classify the “Big Five” personality traits from a speaker’s utterance, but these have largely relied on judgments of personality obtained from external raters listening to the utterances in isolation. This work instead classifies personality traits based on self-reported personality tests, which are more valid and more difficult to identify. Our approach, which uses lexical and acoustic-prosodic features, yields predictions that are between 6.4% and 19.2% more accurate than chance. This approach predicts Opennessto-Experience and Neuroticism most successfully, with less accurate recognition of Extroversion. We compare the performance of classification and regression techniques, and also explore predicting personality clusters.",
"title": ""
},
{
"docid": "a88eb6af576d056e8d3871afef725516",
"text": "Clouds play an important role in creating realistic images of outdoor scenes. Many methods have therefore been proposed for displaying realistic clouds. However, the realism of the resulting images depends on many parameters used to render them and it is often difficult to adjust those parameters manually. This paper proposes a method for addressing this problem by solving an inverse rendering problem: given a non-uniform synthetic cloud density distribution, the parameters for rendering the synthetic clouds are estimated using photographs of real clouds. The objective function is defined as the difference between the color histograms of the photograph and the synthetic image. Our method searches for the optimal parameters using genetic algorithms. During the search process, we take into account the multiple scattering of light inside the clouds. The search process is accelerated by precomputing a set of intermediate images. After ten to twenty minutes of precomputation, our method estimates the optimal parameters within a minute.",
"title": ""
},
{
"docid": "93064713fe271a9e173d790de09f2da6",
"text": "Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.",
"title": ""
},
{
"docid": "1e30732092d2bcdeff624364c27e4c9c",
"text": "Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.",
"title": ""
},
{
"docid": "776e04fa00628e249900b02f1edf9432",
"text": "We propose an algorithm for minimizing the total variation of an image, and provide a proof of convergence. We show applications to image denoising, zooming, and the computation of the mean curvature motion of interfaces.",
"title": ""
},
{
"docid": "1b2d34a38f026b5e24d39cb68c8235ee",
"text": "This book offers a comprehensive introduction to workflow management, the management of business processes with information technology. By defining, analyzing, and redesigning an organization’s resources and operations, workflow management systems ensure that the right information reaches the right person or computer application at the right time. The book provides a basic overview of workflow terminology and organization, as well as detailed coverage of workflow modeling with Petri nets. Because Petri nets make definitions easier to understand for nonexperts, they facilitate communication between designers and users. The book includes a chapter of case studies, review exercises, and a glossary.",
"title": ""
},
{
"docid": "b4ed15850674851fb7e479b7181751d7",
"text": "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.",
"title": ""
},
{
"docid": "417307155547a565d03d3f9c2a235b2e",
"text": "Recent deep learning based methods have achieved the state-of-the-art performance for handwritten Chinese character recognition (HCCR) by learning discriminative representations directly from raw data. Nevertheless, we believe that the long-and-well investigated domain-specific knowledge should still help to boost the performance of HCCR. By integrating the traditional normalization-cooperated direction-decomposed feature map (directMap) with the deep convolutional neural network (convNet), we are able to obtain new highest accuracies for both online and offline HCCR on the ICDAR-2013 competition database. With this new framework, we can eliminate the needs for data augmentation and model ensemble, which are widely used in other systems to achieve their best results. This makes our framework to be efficient and effective for both training and testing. Furthermore, although directMap+convNet can achieve the best results and surpass human-level performance, we show that writer adaptation in this case is still effective. A new adaptation layer is proposed to reduce the mismatch between training and test data on a particular source layer. The adaptation process can be efficiently and effectively implemented in an unsupervised manner. By adding the adaptation layer into the pre-trained convNet, it can adapt to the new handwriting styles of particular writers, and the recognition accuracy can be further improved consistently and significantly. This paper gives an overview and comparison of recent deep learning based approaches for HCCR, and also sets new benchmarks for both online and offline HCCR.",
"title": ""
},
{
"docid": "063a1fe002e0f69dcd6f525d8bb864b2",
"text": "Information retrieval over semantic metadata has recently received a great amount of interest in both industry and academia. In particular, discovering complex and meaningful relationships among this data is becoming an active research topic. Just as ranking of documents is a critical component of today’s search engines, the ranking of relationships will be essential in tomorrow’s semantic analytics engines. Building upon our recent work on specifying these semantic relationships, which we refer to as Semantic Associations, we demonstrate a system where these associations are discovered among a large semantic metabase represented in RDF. Additionally we employ ranking techniques to provide users with the most interesting and relevant results.",
"title": ""
},
{
"docid": "c3c1ca3e4e05779bccf4247296df0876",
"text": "Intramedullary nailing is one of the most convenient biological options for treating distal femoral fractures. Because the distal medulla of the femur is wider than the middle diaphysis and intramedullary nails cannot completely fill the intramedullary canal, intramedullary nailing of distal femoral fractures can be difficult when trying to obtain adequate reduction. Some different methods exist for achieving reduction. The purpose of this study was determine whether the use of blocking screws resolves varus or valgus and translation and recurvatum deformities, which can be encountered in antegrade and retrograde intramedullary nailing. Thirty-four patients with distal femoral fractures underwent intramedullary nailing between January 2005 and June 2011. Fifteen patients treated by intramedullary nailing and blocking screws were included in the study. Six patients had distal diaphyseal fractures and 9 had distal diaphyseo-metaphyseal fractures. Antegrade nailing was performed in 7 patients and retrograde nailing was performed in 8. Reduction during surgery and union during follow-up were achieved in all patients with no significant complications. Mean follow-up was 26.6 months. Mean time to union was 12.6 weeks. The main purpose of using blocking screws is to achieve reduction, but they are also useful for maintaining permanent reduction. When inserting blocking screws, the screws must be placed 1 to 3 cm away from the fracture line to avoid from propagation of the fracture. When applied properly and in an adequate way, blocking screws provide an efficient solution for deformities encountered during intramedullary nailing of distal femur fractures.",
"title": ""
},
{
"docid": "c0d8842983a2d7952de1c187a80479ac",
"text": "Two new topologies of three-phase segmented rotor switched reluctance machine (SRM) that enables the use of standard voltage source inverters (VSIs) for its operation are presented. The topologies has shorter end-turn length, axial length compared to SRM topologies that use three-phase inverters; compared to the conventional SRM (CSRM), these new topologies has the advantage of shorter flux paths that results in lower core losses. FEA based optimization have been performed for a given design specification. The new concentrated winding segmented SRMs demonstrate competitive performance with three-phase standard inverters compared to CSRM.",
"title": ""
},
{
"docid": "51b7cf820e3a46b5daeee6eb83058077",
"text": "Previous taxonomies of software change have focused on the purpose of the change (i.e., the why) rather than the underlying mechanisms. This paper proposes a taxonomy of software change based on characterizing the mechanisms of change and the factors that influence these mechanisms. The ultimate goal of this taxonomy is to provide a framework that positions concrete tools, formalisms and methods within the domain of software evolution. Such a framework would considerably ease comparison between the various mechanisms of change. It would also allow practitioners to identify and evaluate the relevant tools, methods and formalisms for a particular change scenario. As an initial step towards this taxonomy, the paper presents a framework that can be used to characterize software change support tools and to identify the factors that impact on the use of these tools. The framework is evaluated by applying it to three different change support tools and by comparing these tools based on this analysis. Copyright c © 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "a9bc624da2e1fe5787d5a1da63f0bc52",
"text": "While research studies of digital and mobile payment systems in HCI have pointed out design opportunities situated within informal and nuanced mobile contexts, we have not yet understood how we can design digital monies to allow users to use monies more easily in these contexts. In this study, we examined the design of Alipay and WeChat Wallet, two successful mobile payment apps in China, which have been used by Chinese users for purposes such as playing, gifting, and ceremonial practices. Through semi-structured interviews with 24 Chinese users and grounded theory coding, we identified five contexts in which the flexibility and extensive functions of these payment apps have allowed these users to adaptively use digital monies in highly flexible ways. Finally, our analysis arrived at our conceptual frame—special digital monies—to highlight how digital monies, by allowing users to alter and define their transactional rules and pathways, could vastly expand the potential of digital monies to support users beyond standard retail contexts.",
"title": ""
},
{
"docid": "384c0b4e02b1d16eaa42ed12c3f0ae6b",
"text": "In this paper, we discuss the problem of distributing streaming media content, both live and on-demand, to a large number of hosts in a scalable way. Our work is set in the context of the traditional client-server framework. Specifically, we consider the problem that arises when the server is overwhelmed by the volume of requests from its clients. As a solution, we propose Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server. We discuss the proposed solution in some detail, pointing out the interesting research issues that arise, and present a preliminary evaluation using traces gathered at a busy news site during the flash crowd that occurred on September 11, 2001.",
"title": ""
},
{
"docid": "877a1f7bab575c1a8101ff02ed637767",
"text": "Many language-sensitive tools for detecting plagiarism in natural language documents have been developed, particularly for English. Languageindependent tools exist as well, but are considered restrictive as they usually do not take into account specific language features. Detecting plagiarism in Arabic documents is particularly a challenging task because of the complex linguistic structure of Arabic. In this paper, we present a plagiarism detection tool for comparison of Arabic documents to identify potential similarities. The tool is based on a new comparison algorithm that uses heuristics to compare suspect documents at different hierarch ical levels to avoid unnecessary comparisons. We evaluate its performance in terms of precision and recall on a large data set of Arabic documents, and show its capability in identifying direct and sophisticated copying, such as sentence reordering and synonym substitution. We also demonstrate its advantages over other plagiarism detection tools, including Turnitin, the well-known language-independent tool.",
"title": ""
},
{
"docid": "76753fe26a2ed69c5b7099009c9a094f",
"text": "A total of 82 strains of presumptive Aeromonas spp. were identified biochemically and genetically (16S rDNA-RFLP). The strains were isolated from 250 samples of frozen fish (Tilapia, Oreochromis niloticus niloticus) purchased in local markets in Mexico City. In the present study, we detected the presence of several genes encoding for putative virulence factors and phenotypic activities that may play an important role in bacterial infection. In addition, we studied the antimicrobial patterns of those strains. Molecular identification demonstrated that the prevalent species in frozen fish were Aeromonas salmonicida (67.5%) and Aeromonas bestiarum (20.9%), accounting for 88.3% of the isolates, while the other strains belonged to the species Aeromonas veronii (5.2%), Aeromonas encheleia (3.9%) and Aeromonas hydrophila (2.6%). Detection by polymerase chain reaction (PCR) of genes encoding putative virulence factors common in Aeromonas, such as aerolysin/hemolysin, lipases including the glycerophospholipid-cholesterol acyltransferase (GCAT), serine protease and DNases, revealed that they were all common in these strains. Our results showed that first generation quinolones and second and third generation cephalosporins were the drugs with the best antimicrobial effect against Aeromonas spp. In Mexico, there have been few studies on Aeromonas and its putative virulence factors. The present work therefore highlights an important incidence of Aeromonas spp., with virulence potential and antimicrobial resistance, isolated from frozen fish intended for human consumption in Mexico City.",
"title": ""
}
] |
scidocsrr
|
9bf4522a0451bd810edf653eed4f24cf
|
Web Security: Detection of Cross Site Scripting in PHP Web Application using Genetic Algorithm
|
[
{
"docid": "d3fc62a9858ddef692626b1766898c9f",
"text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.",
"title": ""
},
{
"docid": "77b18fe7f6a2af7aaaafc20bc7b1a5e7",
"text": "Recently, machine-learning based vulnerability prediction models are gaining popularity in web security space, as these models provide a simple and efficient way to handle web application security issues. Existing state-of-art Cross-Site Scripting (XSS) vulnerability prediction approaches do not consider the context of the user-input in output-statement, which is very important to identify context-sensitive security vulnerabilities. In this paper, we propose a novel feature extraction algorithm to extract basic and context features from the source code of web applications. Our approach uses these features to build various machine-learning models for predicting context-sensitive Cross-Site Scripting (XSS) security vulnerabilities. Experimental results show that the proposed features based prediction models can discriminate vulnerable code from non-vulnerable code at a very low false rate.",
"title": ""
}
] |
[
{
"docid": "495143978d38979b64c3556a77740979",
"text": "We address the practical problems of estimating the information relations that characterize large networks. Building on methods developed for analysis of the neural code, we show that reliable estimates of mutual information can be obtained with manageable computational effort. The same methods allow estimation of higher order, multi–information terms. These ideas are illustrated by analyses of gene expression, financial markets, and consumer preferences. In each case, information theoretic measures correlate with independent, intuitive measures of the underlying structures in the system.",
"title": ""
},
{
"docid": "9680944f9e6b4724bdba752981845b68",
"text": "A software product line is a set of program variants, typically generated from a common code base. Feature models describe variability in product lines by documenting features and their valid combinations. In product-line engineering, we need to reason about variability and program variants for many different tasks. For example, given a feature model, we might want to determine the number of all valid feature combinations or compute specific feature combinations for testing. However, we found that contemporary reasoning approaches can only reason about feature combinations, not about program variants, because they do not take abstract features into account. Abstract features are features used to structure a feature model that, however, do not have any impact at implementation level. Using existing feature-model reasoning mechanisms for program variants leads to incorrect results. Hence, although abstract features represent domain decisions that do not affect the generation of a program variant. We raise awareness of the problem of abstract features for different kinds of analyses on feature models. We argue that, in order to reason about program variants, abstract features should be made explicit in feature models. We present a technique based on propositional formulas that enables to reason about program variants rather than feature combinations. In practice, our technique can save effort that is caused by considering the same program variant multiple times, for example, in product-line testing.",
"title": ""
},
{
"docid": "b6f9d5015fddbf92ab44ae6ce2f7d613",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "af6f5ef41a3737975893f95796558900",
"text": "In this work, we propose a multi-task convolutional neural network learning approach that can simultaneously perform iris localization and presentation attack detection (PAD). The proposed multi-task PAD (MT-PAD) is inspired by an object detection method which directly regresses the parameters of the iris bounding box and computes the probability of presentation attack from the input ocular image. Experiments involving both intra-sensor and cross-sensor scenarios suggest that the proposed method can achieve state-of-the-art results on publicly available datasets. To the best of our knowledge, this is the first work that performs iris detection and iris presentation attack detection simultaneously.",
"title": ""
},
{
"docid": "fc94c6fb38198c726ab3b417c3fe9b44",
"text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.",
"title": ""
},
{
"docid": "98a65cca7217dfa720dd4ed2972c3bdd",
"text": "Intramuscular fat percentage (IMF%) has been shown to have a positive influence on the eating quality of red meat. Selection of Australian lambs for increased lean tissue and reduced carcass fatness using Australian Sheep Breeding Values has been shown to decrease IMF% of the Muscularis longissimus lumborum. The impact this selection has on the IMF% of other muscle depots is unknown. This study examined IMF% in five different muscles from 400 lambs (M. longissimus lumborum, Muscularis semimembranosus, Muscularis semitendinosus, Muscularis supraspinatus, Muscularis infraspinatus). The sires of these lambs had a broad range in carcass breeding values for post-weaning weight, eye muscle depth and fat depth over the 12th rib (c-site fat depth). Results showed IMF% to be highest in the M. supraspinatus (4.87 ± 0.1, P<0.01) and lowest in the M. semimembranosus (3.58 ± 0.1, P<0.01). Hot carcass weight was positively associated with IMF% of all muscles. Selection for decreasing c-site fat depth reduced IMF% in the M. longissimus lumborum, M. semimembranosus and M. semitendinosus. Higher breeding values for post-weaning weight and eye muscle depth increased and decreased IMF%, respectively, but only in the lambs born as multiples and raised as singles. For each per cent increase in lean meat yield percentage (LMY%), there was a reduction in IMF% of 0.16 in all five muscles examined. Given the drive within the lamb industry to improve LMY%, our results indicate the importance of continued monitoring of IMF% throughout the different carcass regions, given its importance for eating quality.",
"title": ""
},
{
"docid": "10a0f370ad3e9c3d652e397860114f90",
"text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.",
"title": ""
},
{
"docid": "619c905f7ef5fa0314177b109e0ec0e6",
"text": "The aim of this review is to systematically summarise qualitative evidence about work-based learning in health care organisations as experienced by nursing staff. Work-based learning is understood as informal learning that occurs inside the work community in the interaction between employees. Studies for this review were searched for in the CINAHL, PubMed, Scopus and ABI Inform ProQuest databases for the period 2000-2015. Nine original studies met the inclusion criteria. After the critical appraisal by two researchers, all nine studies were selected for the review. The findings of the original studies were aggregated, and four statements were prepared, to be utilised in clinical work and decision-making. The statements concerned the following issues: (1) the culture of the work community; (2) the physical structures, spaces and duties of the work unit; (3) management; and (4) interpersonal relations. Understanding the nurses' experiences of work-based learning and factors behind these experiences provides an opportunity to influence the challenges of learning in the demanding context of health care organisations.",
"title": ""
},
{
"docid": "d26ce319db7b1583347d34ff8251fbc0",
"text": "The study of metacognition can shed light on some fundamental issues about consciousness and its role in behavior. Metacognition research concerns the processes by which people self reflect on their own cognitive and memory processes (monitoring), and how they put their metaknowledge to use in regulating their information processing and behavior (control). Experimental research on metacognition has addressed the following questions: First, what are the bases of metacognitive judgments that people make in monitoring their learning, remembering, and performance? Second, how valid are such judgments and what are the factors that affect the correspondence between subjective and objective indexes of knowing? Third, what are the processes that underlie the accuracy and inaccuracy of metacognitive judgments? Fourth, how does the output of metacognitive monitoring contribute to the strategic regulation of learning and remembering? Finally, how do the metacognitive processes of monitoring and control affect actual performance? Research addressing these questions is reviewed, emphasizing its implication for issues concerning consciousness, in particular, the genesis of subjective experience, the function of self-reflective consciousness, and the cause-and-effect relation between subjective experience and behavior.",
"title": ""
},
{
"docid": "7cb1dd53d28575f36ef49cacd9d3fcf6",
"text": "A base-station bandpass filter using compact stepped combline resonators is presented. The bandpass filter consists of 4 resonators, has a center-frequency of 2.0175 GHz, a bandwidth of 15 MHz and cross-coupling by a cascaded quadruplet for improved blocking performance. The combline resonators have different size. Therefore, different temperature compensation arrangements need to be applied to guarantee stable performance in the temperature range from -40deg C to 85deg C. The layout will be discussed. A novel cross coupling assembly is introduced. Furthermore, measurement results are shown.",
"title": ""
},
{
"docid": "837d1ef60937df15afc320b2408ad7b0",
"text": "Zero-shot learning has tremendous application value in complex computer vision tasks, e.g. image classification, localization, image captioning, etc., for its capability of transferring knowledge from seen data to unseen data. Many recent proposed methods have shown that the formulation of a compatibility function and its generalization are crucial for the success of a zero-shot learning model. In this paper, we formulate a softmax-based compatibility function, and more importantly, propose a regularized empirical risk minimization objective to optimize the function parameter which leads to a better model generalization. In comparison to eight baseline models on four benchmark datasets, our model achieved the highest average ranking. Our model was effective even when the training set size was small and significantly outperforming an alternative state-of-the-art model in generalized zero-shot recognition tasks.",
"title": ""
},
{
"docid": "e797fbf7b53214df32d5694527ce5ba3",
"text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.",
"title": ""
},
{
"docid": "1b100af2f1d2591d1e34a6be4245624c",
"text": "Urbanisation has become a severe threat to pristine natural areas, causing habitat loss and affecting indigenous animals. Species occurring within an urban fragmented landscape must cope with changes in vegetation type as well as high degrees of anthropogenic disturbance, both of which are possible key mechanisms contributing to behavioural changes and perceived stressors. We attempted to elucidate the effects of urbanisation on the African lesser bushbaby, Galago moholi, by (1) recording activity budgets and body condition (body mass index, BMI) of individuals of urban and rural populations and (2) further determining adrenocortical activity in both populations as a measure of stress via faecal glucocorticoid metabolite (fGCM) levels, following successful validation of an appropriate enzyme immunoassay test system (adrenocorticotropic hormone (ACTH) challenge test). We found that both sexes of the urban population had significantly higher BMIs than their rural counterparts, while urban females had significantly higher fGCM concentrations than rural females. While individuals in the urban population fed mainly on provisioned anthropogenic food sources and spent comparatively more time resting and engaging in aggressive interactions, rural individuals fed almost exclusively on tree exudates and spent more time moving between food sources. Although interactions with humans are likely to be lower in nocturnal than in diurnal species, our findings show that the impact of urbanisation on nocturnal species is still considerable, affecting a range of ecological and physiological aspects.",
"title": ""
},
{
"docid": "418fc1513e2b6fe479a6dc0f981afeb2",
"text": "Multimedia content feeds an ever increasing fraction of the Internet traffic. Video streaming is one of the most important applications driving this trend. Adaptive video streaming is a relevant advancement with respect to classic progressive download streaming such as the one employed by YouTube. It consists in dynamically adapting the content bitrate in order to provide the maximum Quality of Experience, given the current available bandwidth, while ensuring a continuous reproduction. In this paper we propose a Quality Adaptation Controller (QAC) for live adaptive video streaming designed by employing feedback control theory. An experimental comparison with Akamai adaptive video streaming has been carried out. We have found the following main results: 1) QAC is able to throttle the video quality to match the available bandwidth with a transient of less than 30s while ensuring a continuous video reproduction; 2) QAC fairly shares the available bandwidth both in the cases of a concurrent TCP greedy connection or a concurrent video streaming flow; 3) Akamai underutilizes the available bandwidth due to the conservativeness of its heuristic algorithm; moreover, when abrupt available bandwidth reductions occur, the video reproduction is affected by interruptions.",
"title": ""
},
{
"docid": "f93dac471e3d7fa79c740b35fbde0558",
"text": "In settings where only unlabeled speech data is available, speech technology needs to be developed without transcriptions, pronunciation dictionaries, or language modelling text. A similar problem is faced when modeling infant language acquisition. In these cases, categorical linguistic structure needs to be discovered directly from speech audio. We present a novel unsu-pervised Bayesian model that segments unlabeled speech and clusters the segments into hypothesized word groupings. The result is a complete unsupervised tokenization of the input speech in terms of discovered word types. In our approach, a potential word segment (of arbitrary length) is embedded in a fixed-dimensional acoustic vector space. The model, implemented as a Gibbs sampler, then builds a whole-word acoustic model in this space while jointly performing segmentation. We report word error rates in a small-vocabulary connected digit recognition task by mapping the unsupervised decoded output to ground truth transcriptions. The model achieves around 20% error rate, outperforming a previous HMM-based system by about 10% absolute. Moreover, in contrast to the baseline, our model does not require a pre-specified vocabulary size.",
"title": ""
},
{
"docid": "9097bf29a9ad2b33919e0667d20bf6d7",
"text": "Object detection, though gaining popularity, has largely been limited to detection from the ground or from satellite imagery. Aerial images, where the target may be obfuscated from the environmental conditions, angle-of-attack, and zoom level, pose a more significant challenge to correctly detect targets in. This paper describes the implementation of a regional convolutional neural network to locate and classify objects across several categories in complex, aerial images. Our current results show promise in detecting and classifying objects. Further adjustments to the network and data input should increase the localization and classification accuracies.",
"title": ""
},
{
"docid": "d050730d7a5bd591b805f1b9729b0f2d",
"text": "In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping bought about by such deep learning approaches, given sufficient training sets.",
"title": ""
},
{
"docid": "1eda9ea5678debcc886c996162fa475c",
"text": "The main purpose of the study is to examine the impact of parent’s occupation and family income on children performance. For this study a survey was conducted in Southern Punjab. The sample of 15oo parents were collected through a questionnaire using probability sampling technique that is Simple Random Sampling. All the analysis has been carried out on SPSS (Statistical Package for the Social Sciences). Chisquare test is applied to test the effect of parent’s occupation and family income on children’s performance. The results of the study specify that parent’soccupation and family incomehave significant impact on children’s performance.Parents play an important role in child development. Parents with good economic status provide better facilities to their children, results in better performance of the children.",
"title": ""
},
{
"docid": "539dc7f8657f83ac2ae9590a283c7321",
"text": "This paper presents a review on Optical Character Recognition Techniques. Optical Character recognition (OCR) is a technology that allows machines to automatically recognize the characters through an optical mechanism. OCR can be described as Mechanical or electronic conversion of scanned images where images can be handwritten, typewritten or printed text. It converts the images into machine-encoded text that can be used in machine translation, text-to-speech and text mining. Various techniques are available for character recognition in optical character recognition system. This material can be useful for the researchers who wish to work in character recognition area.",
"title": ""
}
] |
scidocsrr
|
cb2abb4eac56c80a1bdb963082ba4938
|
Rapid Manufacture of Novel Variable Impedance Robots
|
[
{
"docid": "59f29d3795e747bb9cee8fcbf87cb86f",
"text": "This paper introduces the development of a semi-active friction based variable physical damping actuator (VPDA) unit. The realization of this unit aims to facilitate the control of compliant robotic joints by providing physical variable damping on demand assisting on the regulation of the oscillations induced by the introduction of compliance. The mechatronics details and the dynamic model of the damper are introduced. The proposed variable damper mechanism is evaluated on a simple 1-DOF compliant joint linked to the ground through a torsion spring. This flexible connection emulates a compliant joint, generating oscillations when the link is perturbed. Preliminary results are presented to show that the unit and the proposed control scheme are capable of replicating simulated relative damping values with good fidelity.",
"title": ""
}
] |
[
{
"docid": "a4d294547c92296a2ea3222dc8d92afe",
"text": "Energy theft is a very common problem in countries like India where consumers of energy are increasing consistently as the population increases. Utilities in electricity system are destroying the amounts of revenue each year due to energy theft. The newly designed AMR used for energy measurements reveal the concept and working of new automated power metering system but this increased the Electricity theft forms administrative losses because of not regular interval checkout at the consumer's residence. It is quite impossible to check and solve out theft by going every customer's door to door. In this paper, a new procedure is followed based on MICROCONTROLLER Atmega328P to detect and control the energy meter from power theft and solve it by remotely disconnect and reconnecting the service (line) of a particular consumer. An SMS will be sent automatically to the utility central server through GSM module whenever unauthorized activities detected and a separate message will send back to the microcontroller in order to disconnect the unauthorized supply. A unique method is implemented by interspersed the GSM feature into smart meters with Solid state relay to deal with the non-technical losses, billing difficulties, and voltage fluctuation complication.",
"title": ""
},
{
"docid": "5779057b8db7eb79dd5ca5332a76dd16",
"text": "Memory encoding and recall involving complex, effortful cognitive processes are impaired by alcohol primarily due to impairment of a select few, but crucial, cortical areas. This review shows how alcohol affects some, but not all, aspects of eyewitnesses' oral free recall performance. The principal results, so far, are that: a) free recall reports by intoxicated witnesses (at the investigated BAC-levels) may contain less, but as accurate, information as reports by sober witnesses; b) immediate reports given by intoxicated witnesses may yield more information compared to reports by sober witnesses given after a one week delay; c) an immediate interview may enhance both intoxicated and sober witnesses' ability to report information in a later interview; and d) reminiscence seems to occur over repeated interviews and the new information seems to be as accurate as the previously reported information. Based on this, recommendations are given for future research to enhance understanding of the multifaceted impact of alcohol on witnesses' oral free recall of violent crimes.",
"title": ""
},
{
"docid": "fcc092e71c7a0b38edb23e4eb92dfb21",
"text": "In this work, we focus on semantic parsing of natural language conversations. Most existing methods for semantic parsing are based on understanding the semantics of a single sentence at a time. However, understanding conversations also requires an understanding of conversational context and discourse structure across sentences. We formulate semantic parsing of conversations as a structured prediction task, incorporating structural features that model the ‘flow of discourse’ across sequences of utterances. We create a dataset for semantic parsing of conversations, consisting of 113 real-life sequences of interactions of human users with an automated email assistant. The data contains 4759 natural language statements paired with annotated logical forms. Our approach yields significant gains in performance over traditional semantic parsing.",
"title": ""
},
{
"docid": "f37fb443aaa8194ee9fa8ba496e6772a",
"text": "Current Light Field (LF) cameras offer fixed resolution in space, time and angle which is decided a-priori and is independent of the scene. These cameras either trade-off spatial resolution to capture single-shot LF or tradeoff temporal resolution by assuming a static scene to capture high spatial resolution LF. Thus, capturing high spatial resolution LF video for dynamic scenes remains an open and challenging problem. We present the concept, design and implementation of a LF video camera that allows capturing high resolution LF video. The spatial, angular and temporal resolution are not fixed a-priori and we exploit the scene-specific redundancy in space, time and angle. Our reconstruction is motion-aware and offers a continuum of resolution tradeoff with increasing motion in the scene. The key idea is (a) to design efficient multiplexing matrices that allow resolution tradeoffs, (b) use dictionary learning and sparse representations for robust reconstruction, and (c) perform local motion-aware adaptive reconstruction. We perform extensive analysis and characterize the performance of our motion-aware reconstruction algorithm. We show realistic simulations using a graphics simulator as well as real results using a LCoS based programmable camera. We demonstrate novel results such as high resolution digital refocusing for dynamic moving objects.",
"title": ""
},
{
"docid": "6330bfa6be0361e2c0d2985372db9f0a",
"text": "The increasing pervasiveness of the internet, broadband connections and the emergence of digital compression technologies have dramatically changed the face of digital music piracy. Digitally compressed music files are essentially a perfect public economic good, and illegal copying of these files has increasingly become rampant. This paper presents a study on the behavioral dynamics which impact the piracy of digital audio files, and provides a contrast with software piracy. Our results indicate that the general ethical model of software piracy is also broadly applicable to audio piracy. However, significant enough differences with software underscore the unique dynamics of audio piracy. Practical implications that can help the recording industry to effectively combat piracy, and future research directions are highlighted.",
"title": ""
},
{
"docid": "58d4b95cc0ce39126c962e88b1bd6ba1",
"text": "The quality of image encryption is commonly measured by the Shannon entropy over the ciphertext image. However, this measurement does not consider to the randomness of local image blocks and is inappropriate for scrambling based image encryption methods. In this paper, a new information entropy-based randomness measurement for image encryption is introduced which, for the first time, answers the question of whether a given ciphertext image is sufficiently random-like. It measures the randomness over the ciphertext in a fairer way by calculating the averaged entropy of a series of small image blocks within the entire test image. In order to fulfill both quantitative and qualitative measurement, the expectation and the variance of this averaged block entropy for a true-random image are strictly derived and corresponding numerical reference tables are also provided. Moreover, a hypothesis test at significance α-level is given to help accept or reject the hypothesis that the test image is ideally encrypted/random-like. Simulation results show that the proposed test is able to give both effectively quantitative and qualitative results for image encryption. The same idea can also be applied to measure other digital data, like audio and video.",
"title": ""
},
{
"docid": "c346820b43f99aa6714900c5b110db13",
"text": "BACKGROUND\nDiabetes Mellitus (DM) is a chronic disease that is considered a global public health problem. Education and self-monitoring by diabetic patients help to optimize and make possible a satisfactory metabolic control enabling improved management and reduced morbidity and mortality. The global growth in the use of mobile phones makes them a powerful platform to help provide tailored health, delivered conveniently to patients through health apps.\n\n\nOBJECTIVE\nThe aim of our study was to evaluate the efficacy of mobile apps through a systematic review and meta-analysis to assist DM patients in treatment.\n\n\nMETHODS\nWe conducted searches in the electronic databases MEDLINE (Pubmed), Cochrane Register of Controlled Trials (CENTRAL), and LILACS (Latin American and Caribbean Health Sciences Literature), including manual search in references of publications that included systematic reviews, specialized journals, and gray literature. We considered eligible randomized controlled trials (RCTs) conducted after 2008 with participants of all ages, patients with DM, and users of apps to help manage the disease. The meta-analysis of glycated hemoglobin (HbA1c) was performed in Review Manager software version 5.3.\n\n\nRESULTS\nThe literature search identified 1236 publications. Of these, 13 studies were included that evaluated 1263 patients. In 6 RCTs, there were a statistical significant reduction (P<.05) of HbA1c at the end of studies in the intervention group. The HbA1c data were evaluated by meta-analysis with the following results (mean difference, MD -0.44; CI: -0.59 to -0.29; P<.001; I²=32%).The evaluation favored the treatment in patients who used apps without significant heterogeneity.\n\n\nCONCLUSIONS\nThe use of apps by diabetic patients could help improve the control of HbA1c. In addition, the apps seem to strengthen the perception of self-care by contributing better information and health education to patients. Patients also become more self-confident to deal with their diabetes, mainly by reducing their fear of not knowing how to deal with potential hypoglycemic episodes that may occur.",
"title": ""
},
{
"docid": "00e60176eca7d86261c614196849a946",
"text": "This paper proposes a novel low-profile dual polarized antenna for 2.4 GHz application. The proposed antenna consists of a circular patch with four curved T-stubs and a differential feeding network. Due to the parasitic loading of the curved T-stubs, the bandwidth has been improved. Good impedance matching and dual-polarization with low cross polarization have been achieved within 2.4–2.5 GHz, which is sufficient for WLAN application. The total thickness of the antenna is only 0.031A,o, which is low-profile when compared with its counterparts.",
"title": ""
},
{
"docid": "a41444799f295e5fc325626fd663d77d",
"text": "Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neural, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle; a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when comparing to state-of-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.",
"title": ""
},
{
"docid": "1dbff7292f9578337781616d4a1bb96a",
"text": "This paper proposes a novel approach and a new benchmark for video summarization. Thereby we focus on user videos, which are raw videos containing a set of interesting events. Our method starts by segmenting the video by using a novel “superframe” segmentation, tailored to raw videos. Then, we estimate visual interestingness per superframe using a set of low-, midand high-level features. Based on this scoring, we select an optimal subset of superframes to create an informative and interesting summary. The introduced benchmark comes with multiple human created summaries, which were acquired in a controlled psychological experiment. This data paves the way to evaluate summarization methods objectively and to get new insights in video summarization. When evaluating our method, we find that it generates high-quality results, comparable to manual, human-created summaries.",
"title": ""
},
{
"docid": "368c769f4427c213c68d1b1d7a0e4ca9",
"text": "The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.",
"title": ""
},
{
"docid": "78d88298e0b0e197f44939ee96210778",
"text": "Cyber-security research and development for SCADA is being inhibited by the lack of available SCADA attack datasets. This paper presents a modular dataset generation framework for SCADA cyber-attacks, to aid the development of attack datasets. The presented framework is based on requirements derived from related prior research, and is applicable to any standardised or proprietary SCADA protocol. We instantiate our framework and validate the requirements using a Python implementation. This paper provides experiments of the framework's usage on a state-of-the-art DNP3 critical infrastructure test-bed, thus proving framework's ability to generate SCADA cyber-attack datasets.",
"title": ""
},
{
"docid": "73be556cf24bfe8362363c8a0b835533",
"text": "This paper presents a low cost solution for energy harvester based on a bistable clamped-clamped PET (PolyEthyleneTerephthalate) beam and two piezoelectric transducers. The beam switching is activated by environmental vibrations. The mechanical-to-electrical energy conversion is performed by two piezoelectric transducers laterally installed to experience beam impacts each time the device switches from one stable state to the other one. Main advantages of the proposed approach are related to the wide frequency band assuring high device efficiency and the adopted low cost technology.",
"title": ""
},
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "4ad3c199ad1ba51372e9f314fc1158be",
"text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using flip-chip bonder.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "5686b87484f2e78da2c33ed03b1a536c",
"text": "Although an automated flexible production cell is an intriguing prospect for small to median enterprises (SMEs) in current global market conditions, the complexity of programming remains one of the major hurdles preventing automation using industrial robots for SMEs. This paper provides a comprehensive review of the recent research progresses on the programming methods for industrial robots, including online programming, offline programming (OLP), and programming using Augmented Reality (AR). With the development of more powerful 3D CAD/PLM software, computer vision, sensor technology, etc. new programming methods suitable for SMEs are expected to grow in years to come. (C) 2011 Elsevier Ltd. All rights reserved.\"",
"title": ""
},
{
"docid": "061fc82fbb5325a8a590b1480734861d",
"text": "Introduction More than 24 million cases of human papillomavirus (HPV) infection occur in adults in the United States, with an estimated 1 million new cases developing each year. The number of outpatient visits for adults who have venereal warts (condyloma acuminata) increased fivefold from 1966 to 1981. (1) HPV infections in children may present as common skin warts, anogenital warts (AGW), oral and laryngeal papillomas, and subclinical infections. The increased incidence of AGW in children has paralleled that of adults. AGW in children present a unique diagnostic challenge: Is the HPV infection a result of child sexual abuse (CSA), which requires reporting to Child Protective Services (CPS), or acquired through an otherwise innocuous mechanism? Practitioners must balance “missing” a case of CSA if they do not report to CPS against reporting to CPS and having parents or other caregivers potentially suffer false accusation and its potential ramifications, which may include losing custody of children. In the past, simply identifying AGW in a young child was considered indicative of CSA by some experts. However, there is no defined national standard beyond the limited guidance provided in the 2005 American Academy of Pediatrics (AAP) Policy Statement, which states that AGW are suspicious for CSA if not perinatally acquired and the rare vertical, nonsexual means of infection have been excluded. (2) Guidance in determining perinatal acquisition or nonsexual transmission is not provided. This review examines the pathophysiology of HPV causing AGW in children and adolescents, diagnostic challenges, treatment options, and a clinical pathway for the evaluation of young children who have AGW when CSA is of concern.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
}
] |
scidocsrr
|
fb09a2ee30dab464632f395e45a61300
|
Anticipation and next action forecasting in video: an end-to-end model with memory
|
[
{
"docid": "6a72b09ce61635254acb0affb1d5496e",
"text": "We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.",
"title": ""
}
] |
[
{
"docid": "9f6fb1de80f4500384097978c3712c68",
"text": "Reflection is a language feature which allows to analyze and transform the behavior of classes at the runtime. Reflection is used for software debugging and testing. Malware authors can leverage reflection to subvert the malware detection by static analyzers. Reflection initializes the class, invokes any method of class, or accesses any field of class. But, instead of utilizing usual programming language syntax, reflection passes classes/methods etc. as parameters to reflective APIs. As a consequence, these parameters can be constructed dynamically or can be encrypted by malware. These cannot be detected by state-of-the-art static tools. We propose EspyDroid, a system that combines dynamic analysis with code instrumentation for a more precise and automated detection of malware employing reflection. We evaluate EspyDroid on 28 benchmark apps employing major reflection categories. Our technique show improved results over FlowDroid via detection of additional undetected flows. These flows have potential to leak sensitive and private information of the users, through various sinks.",
"title": ""
},
{
"docid": "bb2e7ee3a447fd5bad57f2acd0f6a259",
"text": "A new cavity arrangement, namely, the generalized TM dual-mode cavity, is presented in this paper. In contrast with the previous contributions on TM dual-mode filters, the generalized TM dual-mode cavity allows the realization of both symmetric and asymmetric filtering functions, simultaneously exploiting the maximum number of finite frequency transmission zeros. The high design flexibility in terms of number and position of transmission zeros is obtained by exciting and exploiting a set of nonresonating modes. Five structure parameters are used to fully control its equivalent transversal topology. The relationship between structure parameters and filtering function realized is extensively discussed. The design of multiple cavity filters is presented along with the experimental results of a sixth-order filter having six asymmetrically located transmission zeros.",
"title": ""
},
{
"docid": "e8a69f68bc1647c69431ce88a0728777",
"text": "Contrary to popular perception, qualitative research can produce vast amounts of data. These may include verbatim notes or transcribed recordings of interviews or focus groups, jotted notes and more detailed “fieldnotes” of observational research, a diary or chronological account, and the researcher’s reflective notes made during the research. These data are not necessarily small scale: transcribing a typical single interview takes several hours and can generate 20-40 pages of single spaced text. Transcripts and notes are the raw data of the research. They provide a descriptive record of the research, but they cannot provide explanations. The researcher has to make sense of the data by sifting and interpreting them.",
"title": ""
},
{
"docid": "1f0fd314cdc4afe7b7716ca4bd681c16",
"text": "Automatic speech recognition can potentially benefit from the lip motion patterns, complementing acoustic speech to improve the overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations which increase the recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also exploit state of the art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "3e5312f6d3c02d8df2903ea80c1bbae5",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "423d15bbe1c47bc6225030307fc8e379",
"text": "In a secret sharing scheme, a datumd is broken into shadows which are shared by a set of trustees. The family {P′⊆P:P′ can reconstructd} is called the access structure of the scheme. A (k, n)-threshold scheme is a secret sharing scheme having the access structure {P′⊆P: |P′|≥k}. In this paper, by observing a simple set-theoretic property of an access structure, we propose its mathematical definition. Then we verify the definition by proving that every family satisfying the definition is realized by assigning two more shadows of a threshold scheme to trustees.",
"title": ""
},
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "9c780c4d37326ce2a5e2838481f48456",
"text": "A maximum power point tracker has been previously developed for the single high performance triple junction solar cell for hybrid and electric vehicle applications. The maximum power point tracking (MPPT) control method is based on the incremental conductance (IncCond) but removes the need for current sensors. This paper presents the hardware implementation of the maximum power point tracker. Significant efforts have been made to reduce the size to 18 mm times 21 mm (0.71 in times 0.83 in) and the cost to close to $5 US. This allows the MPPT hardware to be integrable with a single solar cell. Precision calorimetry measurements are employed to establish the converter power loss and confirm that an efficiency of 96.2% has been achieved for the 650-mW converter with 20-kHz switching frequency. Finally, both the static and the dynamic tests are conducted to evaluate the tracking performances of the MPPT hardware. The experimental results verify a tracking efficiency higher than 95% under three different insolation levels and a power loss less than 5% of the available cell power under instantaneous step changes between three insolation levels.",
"title": ""
},
{
"docid": "6abc9ea6e1d5183e589194db8520172c",
"text": "Smart decision making at the tactical level is important for Artificial Intelligence (AI) agents to perform well in the domain of real-time strategy (RTS) games. This paper presents a Bayesian model that can be used to predict the outcomes of isolated battles, as well as predict what units are needed to defeat a given army. Model parameters are learned from simulated battles, in order to minimize the dependency on player skill. We apply our model to the game of StarCraft, with the end-goal of using the predictor as a module for making high-level combat decisions, and show that the model is capable of making accurate predictions.",
"title": ""
},
{
"docid": "3255b89b7234595e7078a012d4e62fa7",
"text": "Virtual assistants such as IFTTT and Almond support complex tasks that combine open web APIs for devices and web services. In this work, we explore semantic parsing to understand natural language commands for these tasks and their compositions. We present the ThingTalk dataset, which consists of 22,362 commands, corresponding to 2,681 distinct programs in ThingTalk, a language for compound virtual assistant tasks. To improve compositionality of multiple APIs, we propose SEQ2TT, a Seq2Seq extension using a bottom-up encoding of grammar productions for programs and a maxmargin loss. On the ThingTalk dataset, SEQ2TT obtains 84% accuracy on trained programs and 67% on unseen combinations, an improvement of 12% over a basic sequence-to-sequence model with attention.",
"title": ""
},
{
"docid": "ac2e1a27ae05819d213efe7d51d1b988",
"text": "Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.",
"title": ""
},
{
"docid": "6e198119c72a796bc0b56280503fec18",
"text": "Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for the patient safety and success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs structures and/or functions. Here, we report on a computational method for DDIs prediction based on functional similarity of drugs. The model was set based on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied for 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pairs interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Upon our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs.",
"title": ""
},
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
},
{
"docid": "6de3aca18d6c68f0250c8090ee042a4e",
"text": "JavaScript is widely used by web developers and the complexity of JavaScript programs has increased over the last year. Therefore, the need for program analysis for JavaScript is evident. Points-to analysis for JavaScript is to determine the set of objects to which a reference variable or an object property may point. Points-to analysis for JavaScript is a basis for further program analyses for JavaScript. It has a wide range of applications in code optimization and software engineering tools. However, points-to analysis for JavaScript has not yet been developed.\n JavaScript has dynamic features such as the runtime modification of objects through addition of properties or updating of methods. We propose a points-to analysis for JavaScript which precisely handles the dynamic features of JavaScript. Our work is the first attempt to analyze the points-to behavior of JavaScript. We evaluate the analysis on a set of JavaScript programs. We also apply the analysis to a code optimization technique to show that the analysis can be practically useful.",
"title": ""
},
{
"docid": "a3b3380940613a5fb704727e41e9907a",
"text": "Stackelberg Security Games (SSG) have been widely applied for solving real-world security problems - with a significant research emphasis on modeling attackers' behaviors to handle their bounded rationality. However, access to real-world data (used for learning an accurate behavioral model) is often limited, leading to uncertainty in attacker's behaviors while modeling. This paper therefore focuses on addressing behavioral uncertainty in SSG with the following main contributions: 1) we present a new uncertainty game model that integrates uncertainty intervals into a behavioral model to capture behavioral uncertainty, and 2) based on this game model, we propose a novel robust algorithm that approximately computes the defender's optimal strategy in the worst-case scenario of uncertainty. We show that our algorithm guarantees an additive bound on its solution quality.",
"title": ""
},
{
"docid": "5998ce035f4027c6713f20f8125ec483",
"text": "As the use of automotive radar increases, performance limitations associated with radar-to-radar interference will become more significant. In this paper, we employ tools from stochastic geometry to characterize the statistics of radar interference. Specifically, using two different models for the spatial distributions of vehicles, namely, a Poisson point process and a Bernoulli lattice process, we calculate for each case the interference statistics and obtain analytical expressions for the probability of successful range estimation. This paper shows that the regularity of the geometrical model appears to have limited effect on the interference statistics, and so it is possible to obtain tractable tight bounds for the worst case performance. A technique is proposed for designing the duty cycle for the random spectrum access, which optimizes the total performance. This analytical framework is verified using Monte Carlo simulations.",
"title": ""
},
{
"docid": "de5fd8ae40a2d078101d5bb1859f689b",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
] |
scidocsrr
|
cf6f80403f06d4bb848d729b36bc4e19
|
Trajectory Planning Design Equations and Control of a 4 - axes Stationary Robotic Arm
|
[
{
"docid": "53b43126d066f5e91d7514f5da754ef3",
"text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.",
"title": ""
}
] |
[
{
"docid": "261e3c6f2826473d9128d4c763ffaa41",
"text": "Since remote sensing provides more and more sensors and techniques to accumulate data on urban regions, three-dimensional representations of these complex environments gained much interest for various applications. In order to obtain three-dimensional representations, one of the most practical ways is to generate Digital Surface Models (DSMs) using very high resolution remotely sensed images from two or more viewing directions, or by using LIDAR sensors. Due to occlusions, matching errors and interpolation techniques these DSMs do not exhibit completely steep walls, and in order to obtain real three-dimensional urban models including objects like buildings from these DSMs, advanced methods are needed. A novel approach based on building shape detection, height estimation, and rooftop reconstruction is proposed to achieve realistic three-dimensional building representations. Our automatic approach consists of three main modules as; detection of complex building shapes, understanding rooftop type, and three-dimensional building model reconstruction based on detected shape and rooftop type. Besides the development of the methodology, the goal is to investigate the applicability and accuracy which can be accomplished in this context for different stereo sensor data. We use DSMs of Munich city which are obtained from different satellite (Cartosat-1, Ikonos, WorldView-2) and airborne sensors (3K camera, HRSC, and LIDAR). The paper later focuses on a quantitative comparisons of the outputs from the different multi-view sensors for a better understanding of qualities, capabilities and possibilities for applications. Results look very promising even for the DSMs derived from satellite data.",
"title": ""
},
{
"docid": "693c29b040bb37142d95201589b24d0d",
"text": "We are overwhelmed by the response to IJEIS. This response reflects the importance of the subject of enterprise information systems in global market and enterprise environments. We have some exciting special issues forthcoming in 2006. The first two issues will feature: (i) information and knowledge based approaches to improving performance in organizations, and (ii) hard and soft modeling tools and approaches to data and information management in real life projects and systems. IJEIS encourages researchers and practitioners to share their new ideas and results in enterprise information systems design and implementation, and also share relevant technical issues related to the development of such systems. This issue of IJEIS contains five articles dealing with an approach to evaluating ERP software within the acquisition process, uncertainty in ERP-controlled manufacturing systems, a review on IT business value research , methodologies for evaluating investment in electronic data interchange, and an ERP implementation model. An overview of the papers follows. The first paper, A Three-Dimensional Approach in Evaluating ERP Software within the Acquisition Process is authored by Verville, Bernadas and Halingten. This paper is based on an extensive study of the evaluation process of the acquisition of an ERP software of four organizations. Three distinct process types and activities were found: vendor's evaluation, functional evaluation , and technical evaluation. This paper provides a perspective on evaluation and sets it apart as modality for action, whose intent is to investigate and uncover by means of specific defined evaluative activities all issues pertinent to ERP software that an organization can use in its decision to acquire a solution that will meet its needs. The use of ERP is becoming increasingly prevalent in many modern manufacturing enterprises. However, knowledge of their performance when perturbed by several significant uncertainties simultaneously is not as widespread as it should have been. Koh, Gunasekaran, Saad and Arunachalam authored Uncertainty in ERP-Controlled Manufacturing Systems. The paper presents a developmental and experimental work on modeling uncertainty within an ERP multi-product, multi-level dependent demand manufacturing planning and scheduling system in a simulation model developed using ARENA/ SIMAN. To enumerate how uncertainty af",
"title": ""
},
{
"docid": "b1c6d95b297409a7b47d8fa7e6da6831",
"text": "~I \"e have modified the original model of selective attention, which was previmtsly proposed by Fukushima, and e~tended its ability to recognize attd segment connected characters in cmwive handwriting. Although the or~¢inal model q/'sdective attention ah'ead)' /tad the abilio' to recognize and segment patterns, it did not alwa)w work well when too many patterns were presented simuhaneousl): In order to restrict the nttmher q/patterns to be processed simultaneousO; a search controller has been added to the original model. Tlw new mode/mainly processes the patterns contained in a small \"search area, \" which is mo~vd b)' the search controller A ptvliminao' ev~eriment with compltter simttlatiott has shown that this approach is promisittg. The recogttition arid segmentation q[k'haracters can be sttcces~[itl even thottgh each character itt a handwritten word changes its .shape h)\" the e[]'ect o./the charactetw",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
},
{
"docid": "8296954ffde770f611d86773f72fb1b4",
"text": "Group and async. commit? Better I/O performance But contention unchanged It reduces buffer contention, but... Log space partitioning: by page or xct? – Impacts locality, recovery strategy Dependency tracking: before commit, T4 must persist log records written by: – itself – direct xct deps: T4 T2 – direct page deps: T4 T3 – transitive deps: T4 {T3, T2} T1 Storage is slow – T4 flushes all four logs upon commit (instead of one) Log work (20%) Log contention (46%) Other work (21%) CPU cycles: Lock manager Other contention",
"title": ""
},
{
"docid": "91e8516d2e7e1e9de918251ac694ee08",
"text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.",
"title": ""
},
{
"docid": "700191eaaaf0bdd293fc3bbd24467a32",
"text": "SMART (Semantic web information Management with automated Reasoning Tool) is an open-source project, which aims to provide intuitive tools for life scientists for represent, integrate, manage and query heterogeneous and distributed biological knowledge. SMART was designed with interoperability and extensibility in mind and uses AJAX, SVG and JSF technologies, RDF, OWL, SPARQL semantic web languages, triple stores (i.e. Jena) and DL reasoners (i.e. Pellet) for the automated reasoning. Features include semantic query composition and validation using DL reasoners, a graphical representation of the query, a mapping of DL queries to SPARQL, and the retrieval of pre-computed inferences from an RDF triple store. With a use case scenario, we illustrate how a biological scientist can intuitively query the yeast knowledge base and navigate the results. Continued development of this web-based resource for the biological semantic web will enable new information retrieval opportunities for the life sciences.",
"title": ""
},
{
"docid": "07c34b068cc1217de2e623122a22d2b0",
"text": "Rheumatoid arthritis (RA) is a bone destructive autoimmune disease. Many patients with RA recognize fluctuations of their joint synovitis according to changes of air pressure, but the correlations between them have never been addressed in large-scale association studies. To address this point we recruited large-scale assessments of RA activity in a Japanese population, and performed an association analysis. Here, a total of 23,064 assessments of RA activity from 2,131 patients were obtained from the KURAMA (Kyoto University Rheumatoid Arthritis Management Alliance) database. Detailed correlations between air pressure and joint swelling or tenderness were analyzed separately for each of the 326 patients with more than 20 assessments to regulate intra-patient correlations. Association studies were also performed for seven consecutive days to identify the strongest correlations. Standardized multiple linear regression analysis was performed to evaluate independent influences from other meteorological factors. As a result, components of composite measures for RA disease activity revealed suggestive negative associations with air pressure. The 326 patients displayed significant negative mean correlations between air pressure and swellings or the sum of swellings and tenderness (p = 0.00068 and 0.00011, respectively). Among the seven consecutive days, the most significant mean negative correlations were observed for air pressure three days before evaluations of RA synovitis (p = 1.7 × 10(-7), 0.00027, and 8.3 × 10(-8), for swellings, tenderness and the sum of them, respectively). Standardized multiple linear regression analysis revealed these associations were independent from humidity and temperature. Our findings suggest that air pressure is inversely associated with synovitis in patients with RA.",
"title": ""
},
{
"docid": "f1e0565fbc19791ed636c146a9c2dfcc",
"text": "It is well established that value stocks outperform glamour stocks, yet considerable debate exists about whether the return differential reflects compensation for risk or mispricing. Under mispricing explanations, prices of glamour (value) firms reflect systematically optimistic (pessimistic) expectations; thus, the value/glamour effect should be concentrated (absent) among firms with (without) ex ante identifiable expectation errors. Classifying firms based upon whether expectations implied by current pricing multiples are congruent with the strength of their fundamentals, we document that value/glamour returns and ex post revisions to market expectations are predictably concentrated (absent) among firms with ex ante biased (unbiased) market expectations.",
"title": ""
},
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
},
{
"docid": "a2013a7c9212829187fff9bfa42665e5",
"text": "As companies increase their efforts in retaining customers, being able to predict accurately ahead of time, whether a customer will churn in the foreseeable future is an extremely powerful tool for any marketing team. The paper describes in depth the application of Deep Learning in the problem of churn prediction. Using abstract feature vectors, that can generated on any subscription based company’s user event logs, the paper proves that through the use of the intrinsic property of Deep Neural Networks (learning secondary features in an unsupervised manner), the complete pipeline can be applied to any subscription based company with extremely good churn predictive performance. Furthermore the research documented in the paper was performed for Framed Data (a company that sells churn prediction as a service for other companies) in conjunction with the Data Science Institute at Lancaster University, UK. This paper is the intellectual property of Framed Data.",
"title": ""
},
{
"docid": "93a2d7072ab88ad77c23f7c1dc5a129c",
"text": "In recent decades, the need for efficient and effective image search from large databases has increased. In this paper, we present a novel shape matching framework based on structures common to similar shapes. After representing shapes as medial axis graphs, in which nodes show skeleton points and edges connect nearby points, we determine the critical nodes connecting or representing a shape’s different parts. By using the shortest path distance from each skeleton (node) to each of the critical nodes, we effectively retrieve shapes similar to a given query through a transportation-based distance function. To improve the effectiveness of the proposed approach, we employ a unified framework that takes advantage of the feature representation of the proposed algorithm and the classification capability of a supervised machine learning algorithm. A set of shape retrieval experiments including a comparison with several well-known approaches demonstrate the proposed algorithm’s efficacy and perturbation experiments show its robustness.",
"title": ""
},
{
"docid": "4e4e65f9ee3555f2b3ee134f3ab5ca7d",
"text": "Conventional wisdom has regarded low self-esteem as an important cause of violence, but the opposite view is theoretically viable. An interdisciplinary review of evidence about aggression, crime, and violence contradicted the view that low self-esteem is an important cause. Instead, violence appears to be most commonly a result of threatened egotism--that is, highly favorable views of self that are disputed by some person or circumstance. Inflated, unstable, or tentative beliefs in the self's superiority may be most prone to encountering threats and hence to causing violence. The mediating process may involve directing anger outward as a way of avoiding a downward revision of the self-concept.",
"title": ""
},
{
"docid": "36bee0642c30a3ecab2c9a8996084b61",
"text": "Many works related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless the connection with inverse problem was considered only for the discrete (finite sample) problem which is solved in practice and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and analyse the interplay between the discrete and continuous problems. From a theoretical point of view, this allows to draw a clear connection between the consistency approach imposed in learning theory, and the stability convergence property used in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.",
"title": ""
},
{
"docid": "6c15a9ec021ec38cf65532d06472be9d",
"text": "The aim of this article is to present a case study of usage of one of the data mining methods, neural network, in knowledge discovery from databases in the banking industry. Data mining is automated process of analysing, organization or grouping a large set of data from different perspectives and summarizing it into useful information using special algorithms. Data mining can help to resolve banking problems by finding some regularity, causality and correlation to business information which are not visible at first sight because they are hidden in large amounts of data. In this paper, we used one of the data mining methods, neural network, within the software package Alyuda NeuroInteligence to predict customer churn in bank. The focus on customer churn is to determinate the customers who are at risk of leaving and analysing whether those customers are worth retaining. Neural network is statistical learning model inspired by biological neural and it is used to estimate or approximate functions that can depend on a large number of inputs which are generally unknown. Although the method itself is complicated, there are tools that enable the use of neural networks without much prior knowledge of how they operate. The results show that clients who use more bank services (products) are more loyal, so bank should focus on those clients who use less than three products, and offer them products according to their needs. Similar results are obtained for different network topologies.",
"title": ""
},
{
"docid": "fe42cf28ff020c35d3a3013bb249c7d8",
"text": "Sensors and actuators are the core components of all mechatronic systems used in a broad range of diverse applications. A relatively new and rapidly evolving area is the one of rehabilitation and assistive devices that comes to support and improve the quality of human life. Novel exoskeletons have to address many functional and cost-sensitive issues such as safety, adaptability, customization, modularity, scalability, and maintenance. Therefore, a smart variable stiffness actuator was developed. The described approach was to integrate in one modular unit a compliant actuator with all sensors and electronics required for real-time communications and control. This paper also introduces a new method to estimate and control the actuator's torques without using dedicated expensive torque sensors in conditions where the actuator's torsional stiffness can be adjusted by the user. A 6-degrees-of-freedom exoskeleton was assembled and tested using the technology described in this paper, and is introduced as a real-life case study for the mechatronic design, modularity, and integration of the proposed smart actuators, suitable for human–robot interaction. The advantages are discussed together with possible improvements and the possibility of extending the presented technology to other areas of mechatronics.",
"title": ""
},
{
"docid": "db6e3742a0413ad5f44647ab1826b796",
"text": "Endometrial stromal sarcoma is a rare tumor and has unique histopathologic features. Most tumors of this kind occur in the uterus; thus, the vagina is an extremely rare site. A 34-year-old woman presented with endometrial stromal sarcoma arising in the vagina. No correlative endometriosis was found. Because of the uncommon location, this tumor was differentiated from other more common neoplasms of the vagina, particularly embryonal rhabdomyosarcoma and other smooth muscle tumors. Although the pathogenesis of endometrial stromal tumors remains controversial, the most common theory of its origin is heterotopic Müllerian tissue such as endometriosis tissue. Primitive cells of the pelvis and retroperitoneum are an alternative possible origin for the tumor if endometriosis is not present. According to the literature, the tumor has a fairly good prognosis compared with other vaginal sarcomas. Surgery combined with adjuvant radiotherapy appears to be an adequate treatment.",
"title": ""
},
{
"docid": "80ca2b3737895e9222346109ac092637",
"text": "The common ground between figurative language and humour (in the form of jokes) is what Koestler (1964) termed the bisociation of ideas. In both jokes and metaphors, two disparate concepts are brought together, but the nature and the purpose of this conjunction is different in each case. This paper focuses on this notion of boundaries and attempts to go further by asking the question “when does a metaphor become a joke?”. More specifically, the main research questions of the paper are: (a) How do speakers use metaphor in discourse for humorous purposes? (b) What are the (metaphoric) cognitive processes that relate to the creation of humour in discourse? (c) What does the study of humour in discourse reveal about the nature of metaphoricity? This paper answers these questions by examining examples taken from a three-hour conversation, and considers how linguistic theories of humour (Raskin, 1985; Attardo and Raskin, 1991; Attardo, 1994; 2001) and cognitive theories of metaphor and blending (Lakoff and Johnson, 1980; Fauconnier and Turner, 2002) can benefit from each other. Boundaries in Humour and Metaphor The goal of this paper is to explore the relationship between metaphor (and, more generally, blending) and humour, in order to attain a better understanding of the cognitive processes that are involved or even contribute to laughter in discourse. This section will present briefly research in both areas and will identify possible common ground between the two. More specifically, the notion of boundaries will be explored in both areas. The following section explores how metaphor can be used for humorous purposes in discourse by applying relevant theories of humour and metaphor to conversational data. Linguistic theories of humour highlight the importance of duality and tension in humorous texts. Koestler (1964: 51) in discussing comic creativity notes that: The sudden bisociation of an idea or event with two habitually incompatible matrices will produce a comic effect, provided that the narrative, the semantic pipeline, carries the right kind of emotional tension. When the pipe is punctured, and our expectations are fooled, the now redundant tension gushes out in laughter, or is spilled in the gentler form of the sou-rire [my emphasis]. This oft-quoted passage introduces the basic themes and mechanisms that later were explored extensively within contemporary theories of humour: a humorous text must relate to two different and opposing in some way scenarios; this duality is not",
"title": ""
},
{
"docid": "78c54496ada5e4997c72adfeaae3e41f",
"text": "In the past decade, online music streaming services (MSS), e.g. Pandora and Spotify, experienced exponential growth. The sheer volume of music collection makes music recommendation increasingly important and the related algorithms are well-documented. In prior studies, most algorithms employed content-based model (CBM) and/or collaborative filtering (CF) [3]. The former one focuses on acoustic/signal features extracted from audio content, and the latter one investigates music rating and user listening history. Actually, MSS generated user data present significant heterogeneity. Taking user-music relationship as an example, comment, bookmark, and listening history may potentially contribute to music recommendation in very different ways. Furthermore, user and music can be implicitly related via more complex relationships, e.g., user-play-artist-perform-music. From this viewpoint, user-user, music-music or user-music relationship can be much more complex than the classical CF approach assumes. For these reasons, we model music metadata and MSS generated user data in the form of a heterogeneous graph, where 6 different types of nodes interact through 16 types of relationships. We can propose many recommendation hypotheses based on the ways users and songs are connected on this graph, in the form of meta paths. The recommendation problem, then, becomes a (supervised) random walk problem on the heterogeneous graph [2]. Unlike previous heterogeneous graph mining studies, the constructed heterogeneous graph in our case is more complex, and manually formulated meta-path based hypotheses cannot guarantee good performance. In the pilot study [2], we proposed to automatically extract all the potential meta paths within a given length on the heterogeneous graph scheme, evaluate their recommendation performance on the training data, and build a learning to rank model with the best ones. Results show that the new method can significantly enhance the recommendation performance. However, there are two problems with this approach: 1. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). WSDM 2016 February 22-25, 2016, San Francisco, CA, USA c © 2016 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-3716-8/16/02. DOI: http://dx.doi.org/10.1145/2835776.2855088 including the individually best performing meta paths in the learning to rank model neglects the dependency between features; 2. it is very time consuming to calculate graph based features. Traditional feature selection methods would only work if all feature values are readily available, which would make this recommendation approach highly inefficient. In this proposal, we attempt to address these two problems by adapting the feature selection for ranking method (FSR) proposed by Geng, Liu, Qin, and Li [1]. This feature selection method developed specifically for learning to rank tasks evaluates features based on their importance when used alone, and their similarity between each other. Applying this method on the whole set of meta-path based features would be very costly. Alternatively, we use it on sub meta paths that are shared components of multiple full meta paths. 
We start from sub meta paths of length=1 and only the ones selected by FSR have the chance to grow to sub meta paths of length=2. Then we repeat this process until the selected sub meta paths grow to full ones. During each step, we drop some meta paths because they contain unselected sub meta paths. Finally, we will derive a subset of the original meta paths and save time by extracting values for fewer features. In our preliminary experiment, the proposed method outperforms the original FSR algorithm in both efficiency and effectiveness.",
"title": ""
},
{
"docid": "265e9de6c65996e639fd265be170e039",
"text": "Topical crawling is a young and creative area of research that holds the promise of benefiting from several sophisticated data mining techniques. The use of classification algorithms to guide topical crawlers has been sporadically suggested in the literature. No systematic study, however, has been done on their relative merits. Using the lessons learned from our previous crawler evaluation studies, we experiment with multiple versions of different classification schemes. The crawling process is modeled as a parallel best-first search over a graph defined by the Web. The classifiers provide heuristics to the crawler thus biasing it towards certain portions of the Web graph. Our results show that Naive Bayes is a weak choice for guiding a topical crawler when compared with Support Vector Machine or Neural Network. Further, the weak performance of Naive Bayes can be partly explained by extreme skewness of posterior probabilities generated by it. We also observe that despite similar performances, different topical crawlers cover subspaces on the Web with low overlap.",
"title": ""
}
] |
scidocsrr
|
d854ef98196d90f2aef56af49982a74c
|
A flexible approach for extracting metadata from bibliographic citations
|
[
{
"docid": "bdbbe079493bbfec7fb3cb577c926997",
"text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"title": ""
}
] |
[
{
"docid": "cafa33bb8996d393063e2744f12045b1",
"text": "Latent Semantic Analysis is used as a technique for measuring the coherence of texts. By comparing the vectors for two adjoining segments of text in a highdimensional semantic space, the method provides a characterization of the degree of semantic relatedness between the segments. We illustrate the approach for predicting coherence through re-analyzing sets of texts from two studies that manipulated the coherence of texts and assessed readers' comprehension. The results indicate that the method is able to predict the effect of text coherence on comprehension and is more effective than simple term-term overlap measures. In this manner, LSA can be applied as an automated method that produces coherence predictions similar to propositional modeling. We describe additional studies investigating the application of LSA to analyzing discourse structure and examine the potential of LSA as a psychological model of coherence effects in text comprehension. Measuring Coherence 3 The Measurement of Textual Coherence with Latent Semantic Analysis. In order to comprehend a text, a reader must create a well connected representation of the information in it. This connected representation is based on linking related pieces of textual information that occur throughout the text. The linking of information is a process of determining and maintaining coherence. Because coherence is a central issue to text comprehension, a large number of studies have investigated the process readers use to maintain coherence and to model the readers' representation of the textual information as well as of their previous knowledge (e.g., Lorch & O'Brien, 1995) There are many aspects of a discourse that contribute to coherence, including, coreference, causal relationships, connectives, and signals. For example, Kintsch and van Dijk (Kintsch, 1988; Kintsch & van Dijk, 1978) have emphasized the effect of coreference in coherence through propositional modeling of texts. While coreference captures one aspect of coherence, it is highly correlated with other coherence factors such as causal relationships found in the text (Fletcher, Chrysler, van den Broek, Deaton, & Bloom, 1995; Trabasso, Secco & van den Broek, 1984). Although a propositional model of a text can predict readers' comprehension, a problem with the approach is that in-depth propositional analysis is time consuming and requires a considerable amount of training. Semi-automatic methods of propositional coding (e.g., Turner, 1987) still require a large amount of effort. This degree of effort limits the size of the text that can be analyzed. Thus, most texts analyzed and used in reading comprehension experiments have been small, typically from 50 to 500 words, and almost all are under 1000 words. Automated methods such as readability measures (e.g., Flesch, 1948; Klare, 1963) provide another characterization of the text, however, they do not correlate well with comprehension measures (Britton & Gulgoz, 1991; Kintsch & Vipond, 1979). Thus, while the coherence of a text can be measured, it can often involve considerable effort. In this study, we use Latent Semantic Analysis (LSA) to determine the coherence of texts. A more complete description of the method and approach to using LSA may be found in Deerwester, Dumais, Furnas, Landauer and Harshman, (1990), Landauer and Dumais, (1997), as well as in the preceding article by Landauer, Foltz and Laham (this issue). 
LSA provides a fully automatic method for comparing units of textual information to each other in order to determine their semantic relatedness. These units of text are compared to each other using a derived measure of their similarity of meaning. This measure is based on a Measuring Coherence 4 powerful mathematical analysis of direct and indirect relations among words and passages in a large training corpus. Semantic relatedness so measured, should correspond to a measure of coherence since it captures the extent to which two text units are discussing semantically related information. Unlike methods which rely on counting literal word overlap between units of text, LSA's comparisons are based on a derived semantic relatedness measure which reflects semantic similarity among synonyms, antonyms, hyponyms, compounds, and other words that tend to be used in similar contexts. In this way, it can reflect coherence due to automatic inferences made by readers as well as to literal surface coreference. In addition, since LSA is automatic, there are no constraints on the size of the text analyzed. This permits analyses of much larger texts to examine aspects of their discourse structure. In order for LSA to be considered an appropriate approach for modeling text coherence, we first establish how well LSA captures elements of coherence that are similar to modeling methods such as propositional models. A re-analysis of two studies that examined the role of coherence in readers' comprehension is described. This re-analysis of the texts produces automatic predictions of the coherence of texts which are then compared to measures of the readers' comprehension. We next describe the application of the method to investigating other features of the discourse structure of texts. Finally, we illustrate how the approach applies both as a tool for text researchers and as a theoretical model of text coherence. General approach for using LSA to measure coherence The primary method for using LSA to make coherence predictions is to compare some unit of text to an adjoining unit of text in order to determine the degree to which the two are semantically related. These units could be sentences, paragraphs or even individual words or whole books. This analysis can then be performed for all pairs of adjoining text units in order to characterize the overall coherence of the text. Coherence predictions have typically been performed at a propositional level, in which a set of propositions all contained within working memory are compared or connected to each other (e.g., Kintsch, 1988, In press). For LSA coherence analyses, using sentences as the basic unit of text appears to be an appropriate corresponding level that can be easily parsed by automated methods. Sentences serve as a good level in that they represent a small set of textual information (e.g., typically 3-7 propositions) and thus would be approximately consistent with the amount of information that is held in short term memory. Measuring Coherence 5 As discussed in the preceding article by Landauer, et al. (this issue), the power of computing semantic relatedness with LSA comes from analyzing a large number of text examples. Thus, for computing the coherence of a target text, it may first be necessary to have another set of texts that contain a large proportion of the terms used in the target text and that have occurrences in many contexts. One approach is to use a large number of encyclopedia articles on similar topics as the target text. 
A singular value decomposition (SVD) is then performed on the term by article matrix, thereby generating a high dimensional semantic space which contains most of the terms used in the target text. Individual terms, as well as larger text units such as sentences, can be represented as vectors in this space. Each text unit is represented as the weighted average of vectors of the terms it contains. Typically the weighting is by the log entropy transform of each term (see Landauer, et al., this issue). This weighting helps account for both the term's importance in the particular unit as well as the degree to which the term carries information in the domain of discourse in general. The semantic relatedness of two text units can then be compared by determining the cosine between the vectors for the two units. Thus, to find the coherence between the first and second sentence of a text, the cosine between the vectors for the two sentences would be determined. For instance, two sentences that use exactly the same terms with the same frequencies will have a cosine of 1, while two sentences that use no terms that are semantically related, will tend to have cosines near 0 or below. At intermediate levels, sentences containing terms of related meaning, even if none are the same terms or roots will have more moderate cosines. (It is even possible, although in practice very rare, that two sentences with no words of obvious similarity will have similar overall meanings as indicated by similar LSA vectors in the high dimensional semantic space.) Coherence and text comprehension This paper illustrates a complementary approach to propositional modeling for determining coherence, using LSA, and comparing the predicted coherence to measures of the readers' comprehension. For these analyses, the texts and comprehension measures are taken from two previous studies by Britton and Gulgoz (1988), and, McNamara, et al. (1996). In the first study, the text coherence was manipulated primarily by varying the amount of sentence to sentence repetition of particular important content words through analyzing propositional overlap. Simulating its results with LSA demonstrates the degree to which coherence is carried, or at least reflected, in the Measuring Coherence 6 continuity of lexical semantics, and shows that LSA correctly captures these effects. However, for these texts, a simpler literal word overlap measure, absent any explicit propositional or LSA analysis, also predicts comprehension very well. The second set of texts, those from McNamara et al. (1996), manipulates coherence in much subtler ways; often by substituting words and phrases of related meaning but containing different lexical items to provide the conceptual bridges between one sentence and the next. These materials provide a much more rigorous and interesting test of the LSA technique by requiring it to detect underlying meaning similarities in the absence of literal word repetition. The success of this simulation, and its superiority to d",
"title": ""
},
{
"docid": "f34e256296571f9ec1ae25671a7974f0",
"text": "In this paper, we propose a balanced multi-label propagation algorithm (BMLPA) for overlapping community detection in social networks. As well as its fast speed, another important advantage of our method is good stability, which other multi-label propagation algorithms, such as COPRA, lack. In BMLPA, we propose a new update strategy, which requires that community identifiers of one vertex should have balanced belonging coefficients. The advantage of this strategy is that it allows vertices to belong to any number of communities without a global limit on the largest number of community memberships, which is needed for COPRA. Also, we propose a fast method to generate “rough cores”, which can be used to initialize labels for multi-label propagation algorithms, and are able to improve the quality and stability of results. Experimental results on synthetic and real social networks show that BMLPA is very efficient and effective for uncovering overlapping communities.",
"title": ""
},
{
"docid": "afddd19cb7c08820cf6f190d07bed8eb",
"text": "This paper presents a method for stand-still identification of parameters in a permanent magnet synchronous motor (PMSM) fed from an inverter equipped with an three-phase LCtype output filter. Using a special random modulation strategy, the method uses the inverter for broad-band excitation of the PMSM fed through an LC-filter. Based on the measured current response, model parameters for both the filter (L, R, C) and the PMSM (L and R) are estimated: First, the frequency response of the system is estimated using Welch Modified Periodogram method and then an optimization algorithm is used to find the parameters in an analytical reference model that minimize the model error. To demonstrate the practical feasibility of the method, a fully functional drive including an embedded real-time controller has been built. In addition to modulation, data acquisition and control the whole parameter identification method is also implemented on the real-time controller. Based on laboratory experiments on a 22 kW drive, it it concluded that the embedded identification method can estimate the five parameters in less than ten seconds.",
"title": ""
},
{
"docid": "3f015f42359b6fe38302bc13e923d27d",
"text": "Recently, a rapid growth in the population in urban regions demands the provision of services and infrastructure. These needs can be come up wit the use of Internet of Things (IoT) devices, such as sensors, actuators, smartphones and smart systems. This leans to building Smart City towards the next generation Super City planning. However, as thousands of IoT devices are interconnecting and communicating with each other over the Internet to establish smart systems, a huge amount of data, termed as Big Data, is being generated. It is a challenging task to integrate IoT services and to process Big Data in an efficient way when aimed at decision making for future Super City. Therefore, to meet such requirements, this paper presents an IoT-based system for next generation Super City planning using Big Data Analytics. Authors have proposed a complete system that includes various types of IoT-based smart systems like smart home, vehicular networking, weather and water system, smart parking, and surveillance objects, etc., for dada generation. An architecture is proposed that includes four tiers/layers i.e., 1) Bottom Tier-1, 2) Intermediate Tier-1, 3) Intermediate Tier 2, and 4) Top Tier that handle data generation and collections, communication, data administration and processing, and data interpretation, respectively. The system implementation model is presented from the generation and collection of data to the decision making. The proposed system is implemented using Hadoop ecosystem with MapReduce programming. The throughput and processing time results show that the proposed Super City planning system is more efficient and scalable. KeyWoRDS Big Data, Hadoop, IoT, Smart City, Super City",
"title": ""
},
{
"docid": "a89e43a3371f1a4bd9cc7d2d71a363b9",
"text": "Waste management is one of the primary problem that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. The main theme of the work is to develop a smart intelligent garbage alert system for a proper garbage management. This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of dustbin with proper verification based on level of garbage filling. This process is aided by the ultrasonic sensor which is interfaced with Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once if garbage is filled. After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of RFID Tag. RFID is a computing technology that is used for verification process and in addition, it also enhances the smart garbage alert system by providing automatic identification of garbage filled in the dustbin and sends the status of clean-up to the server affirming that the work is done. The whole process is upheld by an embedded module integrated with RF ID and IOT Facilitation. The real time status of how waste collection is being done could be monitored and followed up by the municipality authority with the aid of this system. In addition to this the necessary remedial / alternate measures could be adapted. An Android application is developed and linked to a web server to intimate the alerts from the microcontroller to the urban office and to perform the remote monitoring of the cleaning process, done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using Wi-Fi module.",
"title": ""
},
{
"docid": "323113ab2bed4b8012f3a6df5aae63be",
"text": "Clustering data generally involves some input parameters or heuristics that are usually unknown at the time they are needed. We discuss the general problem of parameters in clustering and present a new approach, TURN, based on boundary detection and apply it to the clustering of web log data. We also present the use of di erent lters on the web log data to focus the clustering results and discuss di erent coeÆcients for de ning similarity in a non-Euclidean space.",
"title": ""
},
{
"docid": "7f14c41cc6ca21e90517961cf12c3c9a",
"text": "Probiotic microorganisms have been documented over the past two decades to play a role in cholesterol-lowering properties via various clinical trials. Several mechanisms have also been proposed and the ability of these microorganisms to deconjugate bile via production of bile salt hydrolase (BSH) has been widely associated with their cholesterol lowering potentials in prevention of hypercholesterolemia. Deconjugated bile salts are more hydrophobic than their conjugated counterparts, thus are less reabsorbed through the intestines resulting in higher excretion into the feces. Replacement of new bile salts from cholesterol as a precursor subsequently leads to decreased serum cholesterol levels. However, some controversies have risen attributed to the activities of deconjugated bile acids that repress the synthesis of bile acids from cholesterol. Deconjugated bile acids have higher binding affinity towards some orphan nuclear receptors namely the farsenoid X receptor (FXR), leading to a suppressed transcription of the enzyme cholesterol 7-alpha hydroxylase (7AH), which is responsible in bile acid synthesis from cholesterol. This notion was further corroborated by our current docking data, which indicated that deconjugated bile acids have higher propensities to bind with the FXR receptor as compared to conjugated bile acids. Bile acids-activated FXR also induces transcription of the IBABP gene, leading to enhanced recycling of bile acids from the intestine back to the liver, which subsequently reduces the need for new bile formation from cholesterol. Possible detrimental effects due to increased deconjugation of bile salts such as malabsorption of lipids, colon carcinogenesis, gallstones formation and altered gut microbial populations, which contribute to other varying gut diseases, were also included in this review. Our current findings and review substantiate the need to look beyond BSH deconjugation as a single factor/mechanism in strain selection for hypercholesterolemia, and/or as a sole mean to justify a cholesterol-lowering property of probiotic strains.",
"title": ""
},
{
"docid": "4f0b28ded91c48913a13bde141a3637f",
"text": "This paper presents our work in mapping the design space of techniques for temporal graph visualisation. We identify two independent dimensions upon which the techniques can be classified: graph structural encoding and temporal encoding. Based on these dimensions, we create a matrix into which we organise existing techniques. We identify gaps in this design space which may prove interesting opportunities for the development of novel techniques. We also consider additional dimensions upon which further useful classification could be made. In organising the disparate existing approaches from a wide range of domains, our classification will assist those new to the research area, and designers and evaluators developing systems for temporal graph data by raising awareness of the range of possible approaches available, and highlighting possible directions for further research.",
"title": ""
},
{
"docid": "de6e139d0b5dc295769b5ddb9abcc4c6",
"text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.",
"title": ""
},
{
"docid": "4bf3d64ed814ee9b20c66924901183c9",
"text": "In this paper, we introduce GTID, a technique that can actively and passively fingerprint wireless devices and their types using wire-side observations in a local network. GTID exploits information that is leaked as a result of heterogeneity in devices, which is a function of different device hardware compositions and variations in devices' clock skew. We apply statistical techniques on network traffic to create unique, reproducible device and device type signatures, and use artificial neural networks (ANNs) for classification. We demonstrate the efficacy of our technique on both an isolated testbed and a live campus network (during peak hours) using a corpus of 37 devices representing a wide range of device classes (e.g., iPads, iPhones, Google Phones, etc.) and traffic types (e.g., Skype, SCP, ICMP, etc.). Our experiments provided more than 300 GB of traffic captures which we used for ANN training and performance evaluation. In order for any fingerprinting technique to be practical, it must be able to detect previously unseen devices (i.e., devices for which no stored signature is available) and must be able to withstand various attacks. GTID is a fingerprinting technique to detect previously unseen devices and to illustrate its resilience under various attacker models. We measure the performance of GTID by considering accuracy, recall, and processing time and also illustrate how it can be used to complement existing security mechanisms (e.g., authentication systems) and to detect counterfeit devices.",
"title": ""
},
{
"docid": "c68729167831b81a2d694664a4cfa90b",
"text": "Micro aerial vehicles (MAV) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot's motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without loosing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle's trajectory in real-time.",
"title": ""
},
{
"docid": "b56a6ce08cf00fefa1a1b303ebf21de9",
"text": "Freesound is an online collaborative sound database where people with diverse interests share recorded sound samples under Creative Commons licenses. It was started in 2005 and it is being maintained to support diverse research projects and as a service to the overall research and artistic community. In this demo we want to introduce Freesound to the multimedia community and show its potential as a research resource. We begin by describing some general aspects of Freesound, its architecture and functionalities, and then explain potential usages that this framework has for research applications.",
"title": ""
},
{
"docid": "f89b282f58ac28975285a24194c209f2",
"text": "Creating pixel art is a laborious process that requires artists to place individual pixels by hand. Although many image editors provide vector-to-raster conversions, the results produced do not meet the standards of pixel art: artifacts such as jaggies or broken lines frequently occur. We describe a novel Pixelation algorithm that rasterizes vector line art while adhering to established conventions used by pixel artists. We compare our results through a user study to those generated by Adobe Illustrator and Photoshop, as well as hand-drawn samples by both amateur and professional pixel artists.",
"title": ""
},
{
"docid": "819de9493806b5baed90d68ebb71bb90",
"text": "ING AND INDEXING SERVICES OR SPECIALIST BIBLIOGRAPHIC DATABASES Major subject A&Is – e.g. Scopus, PubMed, Web of Science, focus on structured access to the highest quality information within a discipline. They typically cover all the key literature but not necessarily all the literature in a discipline. Their utility flows from the perceived certainty and reassurance that they offer to users in providing the authoritative source of search results within a discipline. However, they cannot boast universal coverage of the literature – they provide good coverage of a defined subject niche, but reduce the serendipitous discovery of peripheral material. Also, many A&Is are sold at a premium, which in itself is a barrier to their use. Examples from a wide range of subjects were given in the survey questions to help surveyees understand this classification.",
"title": ""
},
{
"docid": "2ce90f045706cf98f3a0d624828b99b8",
"text": "A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.",
"title": ""
},
{
"docid": "b3c81ac4411c2461dcec7be210ce809c",
"text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access James B.D. Joshi,",
"title": ""
},
{
"docid": "67733befe230741c69665218dd256dc0",
"text": "Model reduction of the Markov process is a basic problem in modeling statetransition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a finite-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregatability and lumpability. We develop a class of spectral state compression methods for three tasks: (1) estimate the transition matrix of a low-rank Markov model, (2) estimate the leading subspace spanned by Markov features, and (3) recover latent structures of the state space like state aggregation and lumpable partition. The proposed methods provide an unsupervised learning framework for identifying Markov features and clustering states. We provide upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips. ∗Anru Zhang is with the Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: [email protected]; Mengdi Wang is with the Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, E-mail: [email protected]. †",
"title": ""
},
{
"docid": "02bc5f32c3a0abdd88d035836de479c9",
"text": "Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city.",
"title": ""
},
{
"docid": "27ba6cfdebdedc58ab44b75a15bbca05",
"text": "OBJECTIVES\nTo assess the influence of material/technique selection (direct vs. CAD/CAM inlays) for large MOD composite adhesive restorations and its effect on the crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 32 extracted maxillary molars (5mm depth and 5mm bucco-palatal width) including immediately sealed dentin for the inlay group. Fifteen teeth were restored with direct composite resin restoration (Miris2) and 17 teeth received milled inlays using Paradigm MZ100 block in the CEREC machine. All inlays were adhesively luted with a light curing composite resin (Filtek Z100). Enamel shrinkage-induced cracks were tracked with photography and transillumination. Cyclic isometric chewing (5 Hz) was simulated, starting with a load of 200 N (5000 cycles), followed by stages of 400, 600, 800, 1000, 1200 and 1400 N at a maximum of 30,000 cycles each. Samples were loaded until fracture or to a maximum of 185,000 cycles.\n\n\nRESULTS\nTeeth restored with the direct technique fractured at an average load of 1213 N and two of them withstood all loading cycles (survival=13%); with inlays, the survival rate was 100%. Most failures with Miris2 occurred above the CEJ and were re-restorable (67%), but generated more shrinkage-induced cracks (47% of the specimen vs. 7% for inlays).\n\n\nSIGNIFICANCE\nCAD/CAM MZ100 inlays increased the accelerated fatigue resistance and decreased the crack propensity of large MOD restorations when compared to direct restorations. While both restorative techniques yielded excellent fatigue results at physiological masticatory loads, CAD/CAM inlays seem more indicated for high-load patients.",
"title": ""
}
] |
scidocsrr
|
0a35fd72a697dbf1713858c1861dce7a
|
A Survey of Data Mining and Deep Learning in Bioinformatics
|
[
{
"docid": "5d8f33b7f28e6a8d25d7a02c1f081af1",
"text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. 
At the same time, human",
"title": ""
},
{
"docid": "447bbce2f595af07c8d784d422e7f826",
"text": "MOTIVATION\nRNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis.\n\n\nRESULTS\nIn this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAn R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org",
"title": ""
},
{
"docid": "1e4ea38a187881d304ea417f98a608d1",
"text": "Breast cancer represents the second leading cause of cancer deaths in women today and it is the most common type of cancer in women. This paper presents some experiments for tumour detection in digital mammography. We investigate the use of different data mining techniques, neural networks and association rule mining, for anomaly detection and classification. The results show that the two approaches performed well, obtaining a classification accuracy reaching over 70% percent for both techniques. Moreover, the experiments we conducted demonstrate the use and effectiveness of association rule mining in image categorization.",
"title": ""
}
] |
[
{
"docid": "4285d9b4b9f63f22033ce9a82eec2c76",
"text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright 2001 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "5923cd462b5b09a3aabd0fbf5c36f00c",
"text": "Exoskeleton robots are used as assistive limbs for elderly persons, rehabilitation for paralyzed persons or power augmentation purposes for healthy persons. The similarity of the exoskeleton robots and human body neuro-muscular system maximizes the device performance. Human body neuro-muscular system provides a flexible and safe movement capability with minimum energy consumption by varying the stiffness of the human joints regularly. Similar to human body, variable stiffness actuators should be used to provide a flexible and safe movement capability in exoskeletons. In the present day, different types of variable stiffness actuator designs are used, and the studies on these actuators are still continuing rapidly. As exoskeleton robots are mobile devices working with the equipment such as batteries, the motors used in the design are expected to have minimal power requirements. In this study, antagonistic, pre-tension and controllable transmission ratio type variable stiffness actuators are compared in terms of energy efficiency and power requirement at an optimal (medium) walking speed for ankle joint. In the case of variable stiffness, the results show that the controllable transmission ratio type actuator compared with the antagonistic design is more efficient in terms of energy consumption and power requirement.",
"title": ""
},
{
"docid": "d60b1a9a23fe37813a24533104a74d70",
"text": "Online display advertising is a multi-billion dollar industry where advertisers promote their products to users by having publishers display their advertisements on popular Web pages. An important problem in online advertising is how to forecast the number of user visits for a Web page during a particular period of time. Prior research addressed the problem by using traditional time-series forecasting techniques on historical data of user visits; (e.g., via a single regression model built for forecasting based on historical data for all Web pages) and did not fully explore the fact that different types of Web pages and different time stamps have different patterns of user visits. In this paper, we propose a series of probabilistic latent class models to automatically learn the underlying user visit patterns among multiple Web pages and multiple time stamps. The last (and the most effective) proposed model identifies latent groups/classes of (i) Web pages and (ii) time stamps with similar user visit patterns, and learns a specialized forecast model for each latent Web page and time stamp class. Compared with a single regression model as well as several other baselines, the proposed latent class model approach has the capability of differentiating the importance of different types of information across different classes of Web pages and time stamps, and therefore has much better modeling flexibility. An extensive set of experiments along with detailed analysis carried out on real-world data from Yahoo! demonstrates the advantage of the proposed latent class models in forecasting online user visits in online display advertising.",
"title": ""
},
{
"docid": "72e4d7729031d63f96b686444c9b446e",
"text": "In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay \"tweaks\" made possible through affective videogames.",
"title": ""
},
{
"docid": "258c90fe18f120a24d8132550ed85a6e",
"text": "Based on the thorough analysis of the literature, Chap. 1 introduces readers with challenges of STEM-driven education in general and those challenges caused by the use of this paradigm in computer science (CS) education in particular. This analysis enables to motivate our approach we discuss throughout the book. Chapter 1 also formulates objectives, research agenda and topics this book addresses. The objectives of the book are to discuss the concepts and approaches enabling to transform the current CS education paradigm into the STEM-driven one at the school and, to some extent, at the university. We seek to implement this transformation through the integration of the STEM pedagogy, the smart content and smart devices and educational robots into the smart STEM-driven environment, using reuse-based approaches taken from software engineering and CS.",
"title": ""
},
{
"docid": "fcc092e71c7a0b38edb23e4eb92dfb21",
"text": "In this work, we focus on semantic parsing of natural language conversations. Most existing methods for semantic parsing are based on understanding the semantics of a single sentence at a time. However, understanding conversations also requires an understanding of conversational context and discourse structure across sentences. We formulate semantic parsing of conversations as a structured prediction task, incorporating structural features that model the ‘flow of discourse’ across sequences of utterances. We create a dataset for semantic parsing of conversations, consisting of 113 real-life sequences of interactions of human users with an automated email assistant. The data contains 4759 natural language statements paired with annotated logical forms. Our approach yields significant gains in performance over traditional semantic parsing.",
"title": ""
},
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "314e1b8bbcc0a5735d86bb751d524a93",
"text": "Ubiquinone (coenzyme Q), in addition to its function as an electron and proton carrier in mitochondrial and bacterial electron transport linked to ATP synthesis, acts in its reduced form (ubiquinol) as an antioxidant, preventing the initiation and/or propagation of lipid peroxidation in biological membranes and in serum low-density lipoprotein. The antioxidant activity of ubiquinol is independent of the effect of vitamin E, which acts as a chain-breaking antioxidant inhibiting the propagation of lipid peroxidation. In addition, ubiquinol can efficiently sustain the effect of vitamin E by regenerating the vitamin from the tocopheroxyl radical, which otherwise must rely on water-soluble agents such as ascorbate (vitamin C). Ubiquinol is the only known lipid-soluble antioxidant that animal cells can synthesize de novo, and for which there exist enzymic mechanisms that can regenerate the antioxidant from its oxidized form resulting from its inhibitory effect of lipid peroxidation. These features, together with its high degree of hydrophobicity and its widespread occurrence in biological membranes and in low-density lipoprotein, suggest an important role of ubiquinol in cellular defense against oxidative damage. Degenerative diseases and aging may bc 1 manifestations of a decreased capacity to maintain adequate ubiquinol levels.",
"title": ""
},
{
"docid": "e39494d730b0ad81bf950b68dc4a7854",
"text": "G4LTL-ST automatically synthesizes control code for industrial Programmable Logic Controls (PLC) from timed behavioral specifications of inputoutput signals. These specifications are expressed in a linear temporal logic (LTL) extended with non-linear arithmetic constraints and timing constraints on signals. G4LTL-ST generates code in IEC 61131-3-compatible Structured Text, which is compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements pseudo-Boolean abstraction of data constraints and the compilation of timing constraints into LTL, together with a counterstrategy-guided abstraction-refinement synthesis loop. Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST supports engineers in specifying realizable control problems by suggesting suitable restrictions on the behavior of the control environment from failed synthesis attempts.",
"title": ""
},
{
"docid": "58bfe45d6f2e8bdb2f641290ee6f0b86",
"text": "Intimate partner violence (IPV) is a common phenomenon worldwide. However, there is a relative dearth of qualitative research exploring IPV in which men are the victims of their female partners. The present study used a qualitative approach to explore how Portuguese men experience IPV. Ten male victims (aged 35–75) who had sought help from domestic violence agencies or from the police were interviewed. Transcripts were analyzed using QSR NVivo10 and coded following thematic analysis. The results enhance our understanding of both the nature and dynamics of the violence that men experience as well as the negative impact of violence on their lives. This study revealed the difficulties that men face in the process of seeking help, namely differences in treatment of men versus women victims. It also highlights that help seeking had a negative emotional impact for most of these men. Finally, this study has important implications for practitioners and underlines macro-level social recommendations for raising awareness about this phenomenon, including the need for changes in victims’ services and advocacy for gender-inclusive campaigns and responses.",
"title": ""
},
{
"docid": "288383c6a6d382b6794448796803699f",
"text": "A transresistance instrumentation amplifier (dual-input transresistance amplifier) was designed, and a prototype was fabricated and tested in a gamma-ray dosimeter. The circuit, explained in this letter, is a differential amplifier which is suitable for amplification of signals from current-source transducers. In the dosimeter application, the amplifier proved superior to a regular (single) transresistance amplifier, giving better temperature stability and better common-mode rejection.",
"title": ""
},
{
"docid": "7476bbec4720e04223d56a71e6bab03e",
"text": "We consider the performance analysis and design optimization of low-density parity check (LDPC) coded multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems for high data rate wireless transmission. The tools of density evolution with mixture Gaussian approximations are used to optimize irregular LDPC codes and to compute minimum operational signal-to-noise ratios (SNRs) for ergodic MIMO OFDM channels. In particular, the optimization is done for various MIMO OFDM system configurations, which include a different number of antennas, different channel models, and different demodulation schemes; the optimized performance is compared with the corresponding channel capacity. It is shown that along with the optimized irregular LDPC codes, a turbo iterative receiver that consists of a soft maximum a posteriori (MAP) demodulator and a belief-propagation LDPC decoder can perform within 1 dB from the ergodic capacity of the MIMO OFDM systems under consideration. It is also shown that compared with the optimal MAP demodulator-based receivers, the receivers employing a low-complexity linear minimum mean-square-error soft-interference-cancellation (LMMSE-SIC) demodulator have a small performance loss (< 1dB) in spatially uncorrelated MIMO channels but suffer extra performance loss in MIMO channels with spatial correlation. Finally, from the LDPC profiles that already are optimized for ergodic channels, we heuristically construct small block-size irregular LDPC codes for outage MIMO OFDM channels; as shown from simulation results, the irregular LDPC codes constructed here are helpful in expediting the convergence of the iterative receivers.",
"title": ""
},
{
"docid": "309a20834f17bd87e10f8f1c051bf732",
"text": "Tamper-resistant cryptographic processors are becoming the standard way to enforce data-usage policies. Their origins lie with military cipher machines and PIN processing in banking payment networks, expanding in the 1990s into embedded applications: token vending machines for prepayment electricity and mobile phone credit. Major applications such as GSM mobile phone identification and pay TV set-top boxes have pushed low-cost cryptoprocessors toward ubiquity. In the last five years, dedicated crypto chips have been embedded in devices such as game console accessories and printer ink cartridges, to control product and accessory after markets. The \"Trusted Computing\" initiative will soon embed cryptoprocessors in PCs so they can identify each other remotely. This paper surveys the range of applications of tamper-resistant hardware and the array of attack and defense mechanisms which have evolved in the tamper-resistance arms race.",
"title": ""
},
{
"docid": "81cd2034b2096db2be699821e499dfa8",
"text": "At the US National Library of Medicine we have developed the Unified Medical Language System (UMLS), whose goal it is to provide integrated access to a large number of biomedical resources by unifying the vocabularies that are used to access those resources. The UMLS currently interrelates some 60 controlled vocabularies in the biomedical domain. The UMLS coverage is quite extensive, including not only many concepts in clinical medicine, but also a large number of concepts applicable to the broad domain of the life sciences. In order to provide an overarching conceptual framework for all UMLS concepts, we developed an upper-level ontology, called the UMLS semantic network. The semantic network, through its 134 semantic types, provides a consistent categorization of all concepts represented in the UMLS. The 54 links between the semantic types provide the structure for the network and represent important relationships in the biomedical domain. Because of the growing number of information resources that contain genetic information, the UMLS coverage in this area is being expanded. We recently integrated the taxonomy of organisms developed by the NLM's National Center for Biotechnology Information, and we are currently working together with the developers of the Gene Ontology to integrate this resource, as well. As additional, standard, ontologies become publicly available, we expect to integrate these into the UMLS construct.",
"title": ""
},
{
"docid": "8381e95910a7500cdb37505e64a9331b",
"text": "Previous ensemble streamflow prediction (ESP) studies in Korea reported that modelling error significantly affects the accuracy of the ESP probabilistic winter and spring (i.e. dry season) forecasts, and thus suggested that improving the existing rainfall-runoff model, TANK, would be critical to obtaining more accurate probabilistic forecasts with ESP. This study used two types of artificial neural network (ANN), namely the single neural network (SNN) and the ensemble neural network (ENN), to provide better rainfall-runoff simulation capability than TANK, which has been used with the ESP system for forecasting monthly inflows to the Daecheong multipurpose dam in Korea. Using the bagging method, the ENN combines the outputs of member networks so that it can control the generalization error better than an SNN. This study compares the two ANN models with TANK with respect to the relative bias and the root-mean-square error. The overall results showed that the ENN performed the best among the three rainfall-runoff models. The ENN also considerably improved the probabilistic forecasting accuracy, measured in terms of average hit score, half-Brier score and hit rate, of the present ESP system that used TANK. Therefore, this study concludes that the ENN would be more effective for ESP rainfall-runoff modelling than TANK or an SNN. Copyright 2005 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "584540f486e1bf112eb8abe8731de341",
"text": "This article overviews the diagnosis and management of traumatic injuries to primary teeth. The child's age, ability to cooperate for treatment, and the potential for collateral damage to developing permanent teeth can complicate the management of these injuries. The etiology of these injuries is reviewed including the disturbing role of child abuse. Serious medical complications including head injury, cervical spine injury, and tetanus are discussed. Diagnostic methods and the rationale for treatment of luxation injuries, crown, and crown/root fractures are included. Treatment priorities should include adequate pain control, safe management of the child's behavior, and protection of the developing permanent teeth.",
"title": ""
},
{
"docid": "6fc9000394cc05b2f70909dd2d0c76fb",
"text": "Thesupport-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.",
"title": ""
},
{
"docid": "795f59c0658a56aa68a9271d591c81a6",
"text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.",
"title": ""
},
{
"docid": "1b1953e3dd28c67e7a8648392422df88",
"text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.",
"title": ""
},
{
"docid": "5547f8ad138a724c2cc05ce65f50ebd2",
"text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we defines QA activity levels for ML product.",
"title": ""
}
] |
scidocsrr
|
1a035c8a688751ae9604f7ed86173e34
|
Scheduling internet of things applications in cloud computing
|
[
{
"docid": "ab5f788eaa10739eb3cd99bf12e424de",
"text": "Successful development of cloud computing paradigm necessitates accurate performance evaluation of cloud data centers. As exact modeling of cloud centers is not feasible due to the nature of cloud centers and diversity of user requests, we describe a novel approximate analytical model for performance evaluation of cloud server farms and solve it to obtain accurate estimation of the complete probability distribution of the request response time and other important performance indicators. The model allows cloud operators to determine the relationship between the number of servers and input buffer size, on one side, and the performance indicators such as mean number of tasks in the system, blocking probability, and probability that a task will obtain immediate service, on the other.",
"title": ""
}
] |
[
{
"docid": "cc85e917ca668a60461ba6848e4c3b42",
"text": "In this paper a generic method for fault detection and isolation (FDI) in manufacturing systems considered as discrete event systems (DES) is presented. The method uses an identified model of the closed loop of plant and controller built on the basis of observed fault free system behavior. An identification algorithm known from literature is used to determine the fault detection model in form of a non-deterministic automaton. New results of how to parameterize this algorithm are reported. To assess the fault detection capability of an identified automaton, probabilistic measures are proposed. For fault isolation, the concept of residuals adapted for DES is used by defining appropriate set operations representing generic fault symptoms. The method is applied to a case study system.",
"title": ""
},
{
"docid": "4c48737ffa2a1e385cd93255ce440584",
"text": "Even though the emerging field of user experience generally acknowledges the importance of aesthetic qualities in interactive products and services, there is a lack of approaches recognizing the fundamentally temporal nature of interaction aesthetics. By means of interaction criticism, I introduce four concepts that begin to characterize the aesthetic qualities of interaction. Pliability refers to the sense of malleability and tightly coupled interaction that makes the use of an interactive visualization captivating. Rhythm is an important characteristic of certain types of interaction, from the sub-second pacing of musical interaction to the hour-scale ebb and flow of peripheral emotional communication. Dramaturgical structure is not only a feature of online role-playing games, but plays an important role in several design genres from the most mundane to the more intellectually sophisticated. Fluency is a way to articulate the gracefulness with which we are able to handle multiple demands for our attention and action in augmented spaces.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "b779b82b0ecc316b13129480586ac483",
"text": "Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable, by sharding state and the execution of transactions, and using S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and extremely highauditability, non-repudiation and ‘blockchain’ techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we argue through evaluating an implementation of the system about its scaling and other features; we illustrate a number of privacy-friendly smart contracts for smart metering, polling and banking and measure their performance.",
"title": ""
},
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
{
"docid": "3f9bcd99eac46264ee0920ddcc866d33",
"text": "The advent of easy to use blogging tools is increasing the number of bloggers leading to more diversity in the quality blogspace. The blog search technologies that help users to find “good” blogs are thus more and more important. This paper proposes a new algorithm called “EigenRumor” that scores each blog entry by weighting the hub and authority scores of the bloggers based on eigenvector calculations. This algorithm enables a higher score to be assigned to the blog entries submitted by a good blogger but not yet linked to by any other blogs based on acceptance of the blogger's prior work. General Terms Algorithms, Management, Experimentation",
"title": ""
},
{
"docid": "e1b6de27518c1c17965a891a8d14a1e1",
"text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.",
"title": ""
},
{
"docid": "06b43b63aafbb70de2601b59d7813576",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "33ef3a8f8f218ef38dce647bf232a3a7",
"text": "Network traffic monitoring and analysis-related research has struggled to scale for massive amounts of data in real time. Some of the vertical scaling solutions provide good implementation of signature based detection. Unfortunately these approaches treat network flows across different subnets and cannot apply anomaly-based classification if attacks originate from multiple machines at a lower speed, like the scenario of Peer-to-Peer Botnets. In this paper the authors build up on the progress of open source tools like Hadoop, Hive and Mahout to provide a scalable implementation of quasi-real-time intrusion detection system. The implementation is used to detect Peer-to-Peer Botnet attacks using machine learning approach. The contributions of this paper are as follows: (1) Building a distributed framework using Hive for sniffing and processing network traces enabling extraction of dynamic network features; (2) Using the parallel processing power of Mahout to build Random Forest based Decision Tree model which is applied to the problem of Peer-to-Peer Botnet detection in quasi-real-time. The implementation setup and performance metrics are presented as initial observations and future extensions are proposed. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "16426be05f066e805e48a49a82e80e2e",
"text": "Ontologies have been developed and used by several researchers in different knowledge domains aiming to ease the structuring and management of knowledge, and to create a unique standard to represent concepts of such a knowledge domain. Considering the computer security domain, several tools can be used to manage and store security information. These tools generate a great amount of security alerts, which are stored in different formats. This lack of standard and the amount of data make the tasks of the security administrators even harder, because they have to understand, using their tacit knowledge, different security alerts to make correlation and solve security problems. Aiming to assist the administrators in executing these tasks efficiently, this paper presents the main features of the computer security incident ontology developed to model, using a unique standard, the concepts of the security incident domain, and how the ontology has been evaluated.",
"title": ""
},
{
"docid": "980a9d76136ffa057865d2bb425dc8e7",
"text": "Research in digital watermarking is mature. Several software implementations of watermarking algorithms are described in the literature, but few attempts have been made to describe hardware implementations. The ultimate objective of the research presented in this paper was to develop low-power, highperformance, real-time, reliable and secure watermarking systems, which can be achieved through hardware implementations. In this paper, we discuss the development of a very-large-scale integration architecture for a high-performance watermarking chip that can perform both invisible robust and invisible fragile image watermarking in the spatial domain. We prototyped the watermarking chip in two ways: (i) by using a Xilinx field-programmable gate array and (ii) by building a custom integrated circuit. To the best of our knowledge, this prototype is the first watermarking chip with both invisible robust and invisible fragile watermarking capabilities.",
"title": ""
},
{
"docid": "b2c789ba7dbb43ebafa331ea8ae252c1",
"text": "Twelve right-handed men performed two mental rotation tasks and two control tasks while whole-head functional magnetic resonance imaging was applied. Mental rotation tasks implied the comparison of different sorts of stimulus pairs, viz. pictures of hands and pictures of tools, which were either identical or mirror images and which were rotated in the plane of the picture. Control tasks were equal except that stimuli pairs were not rotated. Reaction time profiles were consistent with those found in previous research. Imaging data replicate classic areas of activation in mental rotation for hands and tools (bilateral superior parietal lobule and visual extrastriate cortex) but show an important difference in premotor area activation: pairs of hands engender bilateral premotor activation while pairs of tools elicit only left premotor brain activation. The results suggest that participants imagined moving both their hands in the hand condition, while imagining manipulating objects with their hand of preference (right hand) in the tool condition. The covert actions of motor imagery appear to mimic the \"natural way\" in which a person would manipulate the object in reality, and the activation of cortical regions during mental rotation seems at least in part determined by an intrinsic process that depends on the afforded actions elicited by the kind of stimuli presented.",
"title": ""
},
{
"docid": "8dc8dd1ded0a74ec4d004122463025bf",
"text": "To evaluate retinal function objectively in subjects with different stages of age-related macular degeneration (AMD) using multifocal electroretinography (mfERG) and compare it with age-matched control group. A total of 42 subjects with AMD and 37 age-matched healthy control group aged over 55 years were included in this prospective study. mfERG test was performed to all subjects. Average values in concentric ring analysis in four rings (ring 1, from 0° to 5° of eccentricity relative to fixation; ring 2, from 5° to 10°; ring 3, from 10° to 15°; ring 4, over 15°) and in quadrant analysis (superior nasal quadrant, superior temporal quadrant, inferior nasal quadrant and inferior temporal quadrant) were recorded. Test results were evaluated by one-way ANOVA test and independent samples t test. In mfERG concentric ring analysis, N1 amplitude, P1 amplitude and N2 amplitude were found to be lower and N1 implicit time, P1 implicit time and N2 implicit time were found to be delayed in subjects with AMD compared to control group. In quadrant analysis, N1, P1 and N2 amplitude was lower in all quadrants, whereas N1 implicit time was normal and P1 and N2 implicit times were prolonged in subjects with AMD. mfERG is a useful test in evaluating retinal function in subjects with AMD. AMD affects both photoreceptors and inner retinal function at late stages.",
"title": ""
},
{
"docid": "03d41408da6babfc97399c64860f50cd",
"text": "The nine degrees-of-freedom (DOF) inertial measurement units (IMU) are generally composed of three kinds of sensor: accelerometer, gyroscope and magnetometer. The calibration of these sensor suites not only requires turn-table or purpose-built fixture, but also entails a complex and laborious procedure in data sampling. In this paper, we propose a method to calibrate a 9-DOF IMU by using a set of casually sampled raw sensor measurement. Our sampling procedure allows the sensor suite to move by hand and only requires about six minutes of fast and slow arbitrary rotations with intermittent pauses. It requires neither the specially-designed fixture and equipment, nor the strict sequences of sampling steps. At the core of our method are the techniques of data filtering and a hierarchical scheme for calibration. All the raw sensor measurements are preprocessed by a series of band-pass filters before use. And our calibration scheme makes use of the gravity and the ambient magnetic field as references, and hierarchically calibrates the sensor model parameters towards the minimization of the mis-alignment, scaling and bias errors. Moreover, the calibration steps are formulated as a series of function optimization problems and are solved by an evolutionary algorithm. Finally, the performance of our method is experimentally evaluated. The results show that our method can effectively calibrate the sensor model parameters from one set of raw sensor measurement, and yield consistent calibration results.",
"title": ""
},
{
"docid": "d93609853422aed1c326d35ab820095d",
"text": "We present a method for inferring a 4D light field of a hidden scene from 2D shadows cast by a known occluder on a diffuse wall. We do this by determining how light naturally reflected off surfaces in the hidden scene interacts with the occluder. By modeling the light transport as a linear system, and incorporating prior knowledge about light field structures, we can invert the system to recover the hidden scene. We demonstrate results of our inference method across simulations and experiments with different types of occluders. For instance, using the shadow cast by a real house plant, we are able to recover low resolution light fields with different levels of texture and parallax complexity. We provide two experimental results: a human subject and two planar elements at different depths.",
"title": ""
},
{
"docid": "9eef13dc72daa4ec6cce816c61364d2d",
"text": "Bootstrapping is a crucial operation in Gentry’s breakthrough work on fully homomorphic encryption (FHE), where a homomorphic encryption scheme evaluates its own decryption algorithm. There has been a couple of implementations of bootstrapping, among which HElib arguably marks the state-of-the-art in terms of throughput, ciphertext/message size ratio and support for large plaintext moduli. In this work, we applied a family of “lowest digit removal” polynomials to design an improved homomorphic digit extraction algorithm which is a crucial part in bootstrapping for both FV and BGV schemes. When the secret key has 1-norm h = ||s||1 and the plaintext modulus is t = p, we achieved bootstrapping depth log h + log(logp(ht)) in FV scheme. In case of the BGV scheme, we brought down the depth from log h+ 2 log t to log h + log t. We implemented bootstrapping for FV in the SEAL library. We also introduced another “slim mode”, which restrict the plaintexts to batched vectors in Zpr . The slim mode has similar throughput as the full mode, while each individual run is much faster and uses much smaller memory. For example, bootstrapping takes 6.75 seconds for vectors over GF (127) with 64 slots and 1381 seconds for vectors over GF (257) with 128 slots. We also implemented our improved digit extraction procedure for the BGV scheme in HElib.",
"title": ""
},
{
"docid": "ed351364658a99d4d9c10dd2b9be3c92",
"text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.",
"title": ""
},
{
"docid": "036908ecb1c648dc900f41dcde2b1a15",
"text": "A Fractional Fourier Transform (FrFT) based waveform design for joint radar-communication systems (Co-Radar) that embeds data into chirp sub-carriers with different time-frequency rates has been recently presented. Simulations demonstrated the possibility to reach data rates as high as 3.660 Mb/s while maintaining good radar performance compared to a Linear Frequency Modulated (LFM) pulse that occupies the same bandwidth. In this paper the experimental validation of the concept is presented. The system is considered in its basic configuration, with a mono-static radar that generates the waveforms and performs basic radar tasks, and a communication receiver in charge of the pulse demodulation. The entire network is implemented on a Software Defined Radio (SDR) device. The system is then used to acquire data and assess radar and communication capabilities.",
"title": ""
},
{
"docid": "7df97d3a5c393053b22255a0414e574a",
"text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u , a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n2 1ogn)-time algorithm for this problem. We give an implementation of Suurballe’s algorithm that runs in O(m log(, +,+)n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.",
"title": ""
},
{
"docid": "87ecd8c0331b6277cddb6a9a11cec42f",
"text": "OBJECTIVE\nThis study aimed to determine the principal factors contributing to the cost of avoiding a birth with Down syndrome by using cell-free DNA (cfDNA) to replace conventional screening.\n\n\nMETHODS\nA range of unit costs were assigned to each item in the screening process. Detection rates were estimated by meta-analysis and modeling. The marginal cost associated with the detection of additional cases using cfDNA was estimated from the difference in average costs divided by the difference in detection.\n\n\nRESULTS\nThe main factor was the unit cost of cfDNA testing. For example, replacing a combined test costing $150 with 3% false-positive rate and invasive testing at $1000, by cfDNA tests at $2000, $1500, $1000, and $500, the marginal cost is $8.0, $5.8, $3.6, and $1.4m, respectively. Costs were lower when replacing a quadruple test and higher for a 5% false-positive rate, but the relative importance of cfDNA unit cost was unchanged. A contingent policy whereby 10% to 20% women were selected for cfDNA testing by conventional screening was considerably more cost-efficient. Costs were sensitive to cfDNA uptake.\n\n\nCONCLUSION\nUniversal cfDNA screening for Down syndrome will only become affordable by public health purchasers if costs fall substantially. Until this happens, the contingent use of cfDNA is recommended.",
"title": ""
}
] |
scidocsrr
|
6a4bdf8a3531300909b2c97569672111
|
Gated Multimodal Units for Information Fusion
|
[
{
"docid": "0bbfd07d0686fc563f156d75d3672c7b",
"text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.",
"title": ""
}
] |
[
{
"docid": "e668a6b42058bc44925d073fd9ee0cdd",
"text": "Reducing the in-order delivery, or playback, delay of reliable transport layer protocols over error prone networks can significantly improve application layer performance. This is especially true for applications that have time sensitive constraints such as streaming services. We explore the benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay. An analysis of the delay's first two moments is provided so that we can determine when and how much redundancy should be added to meet a user's requirements. Numerical results help show the gains over selective repeat ARQ, as well as the trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate. Finally, the analysis is compared with experimental results to help illustrate how our work can be used to help inform system decisions.",
"title": ""
},
{
"docid": "eed45b473ebaad0740b793bda8345ef3",
"text": "Plyometric training (PT) enhances soccer performance, particularly vertical jump. However, the effectiveness of PT depends on various factors. A systematic search of the research literature was conducted for randomized controlled trials (RCTs) studying the effects of PT on countermovement jump (CMJ) height in soccer players. Ten studies were obtained through manual and electronic journal searches (up to April 2017). Significant differences were observed when compared: (1) PT group vs. control group (ES=0.85; 95% CI 0.47-1.23; I2=68.71%; p<0.001), (2) male vs. female soccer players (Q=4.52; p=0.033), (3) amateur vs. high-level players (Q=6.56; p=0.010), (4) single session volume (<120 jumps vs. ≥120 jumps; Q=6.12, p=0.013), (5) rest between repetitions (5 s vs. 10 s vs. 15 s vs. 30 s; Q=19.10, p<0.001), (6) rest between sets (30 s vs. 60 s vs. 90 s vs. 120 s vs. 240 s; Q=19.83, p=0.001) and (7) and overall training volume (low: <1600 jumps vs. high: ≥1600 jumps; Q=5.08, p=0.024). PT is an effective form of training to improve vertical jump performance (i.e., CMJ) in soccer players. The benefits of PT on CMJ performance are greater for interventions of longer rest interval between repetitions (30 s) and sets (240 s) with higher volume of more than 120 jumps per session and 1600 jumps in total. Gender and competitive level differences should be considered when planning PT programs in soccer players.",
"title": ""
},
{
"docid": "33431760dfc16c095a4f0b8d4ed94790",
"text": "Millions of individuals worldwide are afflicted with acute and chronic respiratory diseases, causing temporary and permanent disabilities and even death. Oftentimes, these diseases occur as a result of altered immune responses. The aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor, acts as a regulator of mucosal barrier function and may influence immune responsiveness in the lungs through changes in gene expression, cell–cell adhesion, mucin production, and cytokine expression. This review updates the basic immunobiology of the AhR signaling pathway with regards to inflammatory lung diseases such as asthma, chronic obstructive pulmonary disease, and silicosis following data in rodent models and humans. Finally, we address the therapeutic potential of targeting the AhR in regulating inflammation during acute and chronic respiratory diseases.",
"title": ""
},
{
"docid": "c906d026937ebea3525f5dee5d923335",
"text": "VGGNets have turned out to be effective for object recognition in still images. However, it is unable to yield good performance by directly adapting the VGGNet models trained on the ImageNet dataset for scene recognition. This report describes our implementation of training the VGGNets on the large-scale Places205 dataset. Specifically, we train three VGGNet models, namely VGGNet-11, VGGNet-13, and VGGNet-16, by using a Multi-GPU extension of Caffe toolbox with high computational efficiency. We verify the performance of trained Places205-VGGNet models on three datasets: MIT67, SUN397, and Places205. Our trained models achieve the state-of-the-art performance o n these datasets and are made public available 1.",
"title": ""
},
{
"docid": "7249e8c5db7d9d048f777aeeaf34954c",
"text": "With the growth of system size and complexity, reliability has become of paramount importance for petascale systems. Reliability, Availability, and Serviceability (RAS) logs have been commonly used for failure analysis. However, analysis based on just the RAS logs has proved to be insufficient in understanding failures and system behaviors. To overcome the limitation of this existing methodologies, we analyze the Blue Gene/P RAS logs and the Blue Gene/P job logs in a cooperative manner. From our co-analysis effort, we have identified a dozen important observations about failure characteristics and job interruption characteristics on the Blue Gene/P systems. These observations can significantly facilitate the research in fault resilience of large-scale systems.",
"title": ""
},
{
"docid": "c564656568c9ce966e88d11babc0d445",
"text": "In this study, Turkish texts belonging to different categories were classified by using word2vec word vectors. Firstly, vectors of the words in all the texts were extracted then, each text was represented in terms of the mean vectors of the words it contains. Texts were classified by SVM and 0.92 F measurement score was obtained for seven different categories. As a result, it was experimentally shown that word2vec is more successful than tf-idf based classification for Turkish document classification.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "1ede796449f610b186638aa2ac9ceedf",
"text": "We introduce a framework for exploring and learning representations of log data generated by enterprise-grade security devices with the goal of detecting advanced persistent threats (APTs) spanning over several weeks. The presented framework uses a divide-and-conquer strategy combining behavioral analytics, time series modeling and representation learning algorithms to model large volumes of data. In addition, given that we have access to human-engineered features, we analyze the capability of a series of representation learning algorithms to complement human-engineered features in a variety of classification approaches. We demonstrate the approach with a novel dataset extracted from 3 billion log lines generated at an enterprise network boundaries with reported command and control communications. The presented results validate our approach, achieving an area under the ROC curve of 0.943 and 95 true positives out of the Top 100 ranked instances on the test data set.",
"title": ""
},
{
"docid": "08f49b003a3a5323e38e4423ba6503a4",
"text": "Neurofeedback (NF), a type of neurobehavioral training, has gained increasing attention in recent years, especially concerning the treatment of children with ADHD. Promising results have emerged from recent randomized controlled studies, and thus, NF is on its way to becoming a valuable addition to the multimodal treatment of ADHD. In this review, we summarize the randomized controlled trials in children with ADHD that have been published within the last 5 years and discuss issues such as the efficacy and specificity of effects, treatment fidelity and problems inherent in placebo-controlled trials of NF. Directions for future NF research are outlined, which should further address specificity and help to determine moderators and mediators to optimize and individualize NF training. Furthermore, we describe methodological (tomographic NF) and technical ('tele-NF') developments that may also contribute to further improvements in treatment outcome.",
"title": ""
},
{
"docid": "0cf3a201140e02039295a2ef4697a635",
"text": "In recent years, deep convolutional neural networks (ConvNet) have shown their popularity in various real world applications. To provide more accurate results, the state-of-the-art ConvNet requires millions of parameters and billions of operations to process a single image, which represents a computational challenge for general purpose processors. As a result, hardware accelerators such as Graphic Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), have been adopted to improve the performance of ConvNet. However, GPU-based solution consumes a considerable amount of power and a traditional RTL design on FPGA requires tedious development that is very time-consuming. In this work, we propose a scalable and parameterized end-to-end ConvNet design using Intel FPGA SDK for OpenCL. To validate the design, we implement VGG 16 model on two different FPGA boards. Consequently, our designs achieve 306.41 GOPS on Intel Stratix A7 and 318.94 GOPS on Intel Arria 10 GX 10AX115. To the best of our knowledge, this outperforms previous FPGA-based accelerators. Compared to the CPU (Intel Xeon E5-2620) and a mid-range GPU (Nvidia K40), our design is 24.3X and 1.7X more energy efficient respectively.",
"title": ""
},
{
"docid": "280672ad5473e061269114d0d11acc90",
"text": "With personalization, consumers can choose from various product attributes and a customized product is assembled based on their preferences. Marketers often offer personalization on websites. This paper investigates consumer purchase intentions toward personalized products in an online selling situation. The research builds and tests three hypotheses: (1) intention to purchase personalized products will be affected by individualism, uncertainty avoidance, power distance, and masculinity dimensions of a national culture; (2) consumers will be more likely to buy personalized search products than experience products; and (3) intention to buy a personalized product will not be influenced by price premiums up to some level. Results indicate that individualism is the only culture dimension to have a significant effect on purchase intention. Product type and individualism by price interaction also have a significant effect, whereas price does not. Major findings and implications are discussed. a Department of Business Administration, School of Economics and Business, Hanyang University, Ansan, South Korea b Department of International Business, School of Commerce and Business, University of Auckland, Auckland, New Zealand c School of Business, State University of New York at New Paltz, New Paltz, New York 12561, USA This work was supported by a Korea Research Foundation Grant (KRF-2004-041-B00211) to the first author. Corresponding author. Tel.: +82 31 400 5653; fax: +82 31 400 5591. E-mail addresses: [email protected] (J. Moon), [email protected] (D. Chadee), [email protected] (S. Tikoo). 1 Tel.: +64 9 373 7599 x85951. 2 Tel.: +1 845 257 2959.",
"title": ""
},
{
"docid": "9e6df649528ce4f011fcc09d089b4559",
"text": "Aspect-based sentiment analysis (ABSA) tries to predict the polarity of a given document with respect to a given aspect entity. While neural network architectures have been successful in predicting the overall polarity of sentences, aspectspecific sentiment analysis still remains as an open problem. In this paper, we propose a novel method for integrating aspect information into the neural model. More specifically, we incorporate aspect information into the neural model by modeling word-aspect relationships. Our novel model, Aspect Fusion LSTM (AF-LSTM) learns to attend based on associative relationships between sentence words and aspect which allows our model to adaptively focus on the correct words given an aspect term. This ameliorates the flaws of other state-of-the-art models that utilize naive concatenations to model word-aspect similarity. Instead, our model adopts circular convolution and circular correlation to model the similarity between aspect and words and elegantly incorporates this within a differentiable neural attention framework. Finally, our model is end-to-end differentiable and highly related to convolution-correlation (holographic like) memories. Our proposed neural model achieves state-of-the-art performance on benchmark datasets, outperforming ATAE-LSTM by 4%− 5% on average across multiple datasets.",
"title": ""
},
{
"docid": "4f9558d13c3caf7244b31adc69c8832d",
"text": "Self-adaptation is a first class concern for cloud applications, which should be able to withstand diverse runtime changes. Variations are simultaneously happening both at the cloud infrastructure level - for example hardware failures - and at the user workload level - flash crowds. However, robustly withstanding extreme variability, requires costly hardware over-provisioning. \n In this paper, we introduce a self-adaptation programming paradigm called brownout. Using this paradigm, applications can be designed to robustly withstand unpredictable runtime variations, without over-provisioning. The paradigm is based on optional code that can be dynamically deactivated through decisions based on control theory. \n We modified two popular web application prototypes - RUBiS and RUBBoS - with less than 170 lines of code, to make them brownout-compliant. Experiments show that brownout self-adaptation dramatically improves the ability to withstand flash-crowds and hardware failures.",
"title": ""
},
{
"docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2",
"text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.",
"title": ""
},
{
"docid": "2bbcdf5f3182262d3fcd6addc1e3f835",
"text": "Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.50 and 96.58 percent, respectively, which are significantly better than the best result reported thus far in the literature.",
"title": ""
},
{
"docid": "981da4eddfc1c9fbbceef437f5f43439",
"text": "A significant number of schizophrenic patients show patterns of smooth pursuit eye-tracking patterns that differ strikingly from the generally smooth eye-tracking seen in normals and in nonschizophrenic patients. These deviations are probably referable not only to motivational or attentional factors, but also to oculomotor involvement that may have a critical relevance for perceptual dysfunction in schizophrenia.",
"title": ""
},
{
"docid": "9be80d8f93dd5edd72ecd759993935d6",
"text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "079de41f553c8bd5c87f7c3cfbe5d836",
"text": "We present a design study for a nano-scale crossbar memory system that uses memristors with symmetrical but highly nonlinear current-voltage characteristics as memory elements. The memory is non-volatile since the memristors retain their state when un-powered. In order to address the nano-wires that make up this nano-scale crossbar, we use two coded demultiplexers implemented using mixed-scale crossbars (in which CMOS-wires cross nano-wires and in which the crosspoint junctions have one-time configurable memristors). This memory system does not utilize the kind of devices (diodes or transistors) that are normally used to isolate the memory cell being written to and read from in conventional memories. Instead, special techniques are introduced to perform the writing and the reading operation reliably by taking advantage of the nonlinearity of the type of memristors used. After discussing both writing and reading strategies for our memory system in general, we focus on a 64 x 64 memory array and present simulation results that show the feasibility of these writing and reading procedures. Besides simulating the case where all device parameters assume exactly their nominal value, we also simulate the much more realistic case where the device parameters stray around their nominal value: we observe a degradation in margins, but writing and reading is still feasible. These simulation results are based on a device model for memristors derived from measurements of fabricated devices in nano-scale crossbars using Pt and Ti nano-wires and using oxygen-depleted TiO(2) as the switching material.",
"title": ""
},
{
"docid": "35725331e4abd61ed311b14086dd3d5c",
"text": "BACKGROUND\nBody dysmorphic disorder (BDD) consists of a preoccupation with an 'imagined' defect in appearance which causes significant distress or impairment in functioning. There has been little previous research into BDD. This study replicates a survey from the USA in a UK population and evaluates specific measures of BDD.\n\n\nMETHOD\nCross-sectional interview survey of 50 patients who satisfied DSM-IV criteria for BDD as their primary disorder.\n\n\nRESULTS\nThe average age at onset was late adolescence and a large proportion of patients were either single or divorced. Three-quarters of the sample were female. There was a high degree of comorbidity with the most common additional Axis l diagnosis being either a mood disorder (26%), social phobia (16%) or obsessive-compulsive disorder (6%). Twenty-four per cent had made a suicide attempt in the past. Personality disorders were present in 72% of patients, the most common being paranoid, avoidant and obsessive-compulsive.\n\n\nCONCLUSIONS\nBDD patients had a high associated comorbidity and previous suicide attempts. BDD is a chronic handicapping disorder and patients are not being adequately identified or treated by health professionals.",
"title": ""
}
] |
scidocsrr
|
5f31dfded71c8aa0596b961f83ad9bfd
|
A new hybrid global optimization approach for selecting clinical and biological features that are relevant to the effective diagnosis of ovarian cancer
|
[
{
"docid": "86826e10d531b8d487fada7a5c151a41",
"text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a kind of popular and effective approaches. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has well global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. Simulation results show the effectiveness and practicality of the proposed method.",
"title": ""
},
{
"docid": "023166be79a875da0b06a4d6d562839f",
"text": "There is no a priori reason why machine learning must borrow from nature. A field could exist, complete with well-defined algorithms, data structures, and theories of learning, without once referring to organisms, cognitive or genetic structures, and psychological or evolutionary theories. Yet at the end of the day, with the position papers written, the computers plugged in, and the programs debugged, a learning edifice devoid of natural metaphor would lack something. It would ignore the fact that all these creations have become possible only after three billion years of evolution on this planet. It would miss the point that the very ideas of adaptation and learning are concepts invented by the most recent representatives of the species Homo sapiens from the careful observation of themselves and life around them. It would miss the point that natural examples of learning and adaptation are treasure troves of robust procedures and structures. Fortunately, the field of machine learning does rely upon nature's bounty for both inspiration and mechanism. Many machine learning systems now borrow heavily from current thinking in cognitive science, and rekindled interest in neural networks and connectionism is evidence of serious mechanistic and philosophical currents running through the field. Another area where natural example has been tapped is in work on genetic algorithms (GAs) and genetics-based machine learning. Rooted in the early cybernetics movement (Holland, 1962), progress has been made in both theory (Holland, 1975; Holland, Holyoak, Nisbett, & Thagard, 1986) and application (Goldberg, 1989; Grefenstette, 1985, 1987) to the point where genetics-based systems are finding their way into everyday commercial use (Davis & Coombs, 1987; Fourman, 1985).",
"title": ""
}
] |
[
{
"docid": "085f0bef6bef5f91659edfad039f422e",
"text": "With the development in modern communication technology, every physical device is now connecting with the internet. IoT is getting emerging technology for connecting physical devices with the user. In this paper we combined existing energy meter with the IoT technology. By implementation of IoT in the case of meter reading for electricity can give customer relief in using electrical energy. In this work a digital energy meter is connected with cloud server via IoT device. It sends the amount of consumed energy of connected customer to webserver. There is a feature for disconnection in the case of unauthorized and unpaid consumption and also have option for renew the connection by paying bill online. We tried to build up a consumer and business friendly system.",
"title": ""
},
{
"docid": "db111db8aaaf1185d9dc99ba53e6e828",
"text": "Topic model uncovers abstract topics within texts documents, which is an essential task in text analysis in social networks. However, identifying topics in text documents in social networks is challenging since the texts are short, unlabeled, and unstructured. For this reason, we propose a topic classification system regarding the features of text documents in social networks. The proposed system is based on several machine-learning algorithms and voting system. The accuracy of the system has been tested using text documents that were classified into three topics. The experiment results show that the proposed system guarantees high accuracy rates in documents topic classification.",
"title": ""
},
{
"docid": "8f6107d045b94917cf0f0bd3f262a1bf",
"text": "An interesting challenge for explainable recommender systems is to provide successful interpretation of recommendations using structured sentences. It is well known that user-generated reviews, have strong influence on the users' decision. Recent techniques exploit user reviews to generate natural language explanations. In this paper, we propose a character-level attention-enhanced long short-term memory model to generate natural language explanations. We empirically evaluated this network using two real-world review datasets. The generated text present readable and similar to a real user's writing, due to the ability of reproducing negation, misspellings, and domain-specific vocabulary.",
"title": ""
},
{
"docid": "1c3c21a159bed9bf293838eee7c6c36b",
"text": "The laminar location of the cell bodies and terminals of interareal connections determines the hierarchical structural organization of the cortex and has been intensively studied. However, we still have only a rudimentary understanding of the connectional principles of feedforward (FF) and feedback (FB) pathways. Quantitative analysis of retrograde tracers was used to extend the notion that the laminar distribution of neurons interconnecting visual areas provides an index of hierarchical distance (percentage of supragranular labeled neurons [SLN]). We show that: 1) SLN values constrain models of cortical hierarchy, revealing previously unsuspected areal relations; 2) SLN reflects the operation of a combinatorial distance rule acting differentially on sets of connections between areas; 3) Supragranular layers contain highly segregated bottom-up and top-down streams, both of which exhibit point-to-point connectivity. This contrasts with the infragranular layers, which contain diffuse bottom-up and top-down streams; 4) Cell filling of the parent neurons of FF and FB pathways provides further evidence of compartmentalization; 5) FF pathways have higher weights, cross fewer hierarchical levels, and are less numerous than FB pathways. Taken together, the present results suggest that cortical hierarchies are built from supra- and infragranular counterstreams. This compartmentalized dual counterstream organization allows point-to-point connectivity in both bottom-up and top-down directions.",
"title": ""
},
{
"docid": "2547e6e8138c49b76062e241391dfc1d",
"text": "Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks. However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation.",
"title": ""
},
{
"docid": "bea5359317e05e0a9c3b4d474ca0067f",
"text": "Agile method Scrum can effectively resolve numerous problems encountered when Capability Maturity Model Integration(CMMI) is implemented in small and medium software development organizations, but some special needs are hard to be satisfied. According to small and medium organizations' characteristic, the paper analyzes feasibility of combining Scrum and CMMI in depth. It is useful for organizations that build a new project management framework based on both CMMI and Scrum practices.",
"title": ""
},
{
"docid": "4ba4930befdc19c32c4fb73abe35d141",
"text": "Us enhance usab adaptivity and designers mod hindering thus level, increasin possibility of e aims at creat concepts and p literature, app user context an to create a ge This ontology alleviate the a download, is ex areas, person visualization.",
"title": ""
},
{
"docid": "04384b62c17f9ff323db4d51bea86fe9",
"text": "Imbalanced data widely exist in many high-impact applications. An example is in air traffic control, where among all three types of accident causes, historical accident reports with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting data set of accident reports is highly imbalanced. On the other hand, this data set can be naturally modeled as a network, with each node representing an accident report, and each edge indicating the similarity of a pair of accident reports. Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representations for imbalanced networks. To bridge this gap, in this paper, we first propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. It is significantly different from the existing Vertex Reinforced Random Walk by discouraging the random particle to return to the nodes that have already been visited. This design is particularly suitable for imbalanced networks as the random particle is more likely to visit the nodes from the same class, which is a desired property for learning node representations. Furthermore, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, where context sampling uses VDRW and the limited label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs from different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms stateof-the-art algorithms for learning network representations from imbalanced data.",
"title": ""
},
{
"docid": "cf0d5d3877bf26822c2196a3a17bd073",
"text": "The purpose of this paper is to review existing sensor and sensor network ontologies to understand whether they can be reused as a basis for a manufacturing perception sensor ontology, or if the existing ontologies hold lessons for the development of a new ontology. We develop an initial set of requirements that should apply to a manufacturing perception sensor ontology. These initial requirements are used in reviewing selected existing sensor ontologies. This paper describes the steps for 1) extending and refining the requirements; 2) proposing hierarchical structures for verifying the purposes of the ontology; and 3) choosing appropriate tools and languages to support such an ontology. Some languages could include OWL (Web Ontology Language) [1] and SensorML (Sensor Markup Language) [2]. This work will be proposed as a standard within the IEEE Robotics and Automation Society (RAS) Ontologies for Robotics Automation (ORA) Working Group [3]. 1. Overview of Sensor Ontology Effort Next generation robotic systems for manufacturing must perform highly complex tasks in dynamic environments. To improve return on investment, manufacturing robots and automation must become more flexible and adaptable, and less dependent on blind, repetitive motions in a structured, fixed environment. To become more adaptable, robots need both precise sensing for parts and assemblies, so they can focus on specific tasks in which they must interact with and manipulate objects; and situational awareness, so they can robustly sense their entire environment for long-term planning and short-term safety. Meeting these requirements will need advances in sensing and perception systems that can identify and locate objects, can detect people and obstacles, and, in general, can perceive as many elements of the manufacturing environment as needed for operation. To robustly and accurately perceive many elements of the environment will require a wide range of collaborating smart sensors such as cameras, laser scanners, stereo cameras, and others. In many cases these sensors will need to be integrated into a distributed sensor network that offers extensive coverage of a manufacturing facility by sensors of complementary capabilities. To support the development of these sensors and networks, the National Institute of Standards and Technology (NIST) manufacturing perception sensor ontology effort looks to create an ontology of sensors, sensor networks, sensor capabilities, environmental objects, and environmental conditions so as to better define and anticipate the wide range of perception systems needed. The ontology will include:",
"title": ""
},
{
"docid": "3a723bb57dedaaf473384243fe6e1ab1",
"text": "Objective\nWe explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality.\n\n\nMaterials and Methods\nData were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches.\n\n\nResults\nUsing a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP).\n\n\nConclusion\nDeep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months.",
"title": ""
},
{
"docid": "9d55947637b358c4dc30d7ba49885472",
"text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;",
"title": ""
},
{
"docid": "ff20e5cd554cd628eba07776fa9a5853",
"text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.",
"title": ""
},
{
"docid": "1303770cf8d0f1b0f312feb49281aa10",
"text": "A terahertz metamaterial absorber (MA) with properties of broadband width, polarization-insensitive, wide angle incidence is presented. Different from the previous methods to broaden the absorption width, this letter proposes a novel combinatorial way which units a nested structure with multiple metal-dielectric layers. We numerically investigate the proposed MA, and the simulation results show that the absorber achieves a broadband absorption over a frequency range of 0.896 THz with the absorptivity greater than 90%. Moreover, the full-width at half maximum of the absorber is up to 1.224 THz which is 61.2% with respect to the central frequency. The mechanism for the broadband absorption originates from the overlapping of longitudinal coupling between layers and coupling of the nested structure. Importantly, the nested structure makes a great contribution to broaden the absorption width. Thus, constructing a nested structure in a multi-layer absorber may be considered as an effective way to design broadband MAs.",
"title": ""
},
{
"docid": "ea262ac413534326feaed7adf8455881",
"text": "Numerous sensors in modern mobile phones enable a range of people-centric applications. This paper envisions a system called PhonePoint Pen that uses the in-built accelerometer in mobile phones to recognize human writing. By holding the phone like a pen, a user should be able to write short messages or draw simple diagrams in the air. The acceleration due to hand gestures can be translated into geometric strokes, and recognized as characters. We prototype the PhonePoint Pen on the Nokia N95 platform, and evaluate it through real users. Results show that English characters can be identified with an average accuracy of 91.9%, if the users conform to a few reasonable constraints. Future work is focused on refining the prototype, with the goal of offering a new user-experience that complements keyboards and touch-screens.",
"title": ""
},
{
"docid": "aaf30f184fcea3852f73a5927100cac7",
"text": "Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.",
"title": ""
},
{
"docid": "45a8fea3e8d780c65811cee79082237f",
"text": "Pedestrian dead reckoning, especially on smart-phones, is likely to play an increasingly important role in indoor tracking and navigation, due to its low cost and ability to work without any additional infrastructure. A challenge however, is that positioning, both in terms of step detection and heading estimation, must be accurate and reliable, even when the use of the device is so varied in terms of placement (e.g. handheld or in a pocket) or orientation (e.g holding the device in either portrait or landscape mode). Furthermore, the placement can vary over time as a user performs different tasks, such as making a call or carrying the device in a bag. A second challenge is to be able to distinguish between a true step and other periodic motion such as swinging an arm or tapping when the placement and orientation of the device is unknown. If this is not done correctly, then the PDR system typically overestimates the number of steps taken, leading to a significant long term error. We present a fresh approach, robust PDR (R-PDR), based on exploiting how bipedal motion impacts acquired sensor waveforms. Rather than attempting to recognize different placements through sensor data, we instead simply determine whether the motion of one or both legs impact the measurements. In addition, we formulate a set of techniques to accurately estimate the device orientation, which allows us to very accurately (typically over 99%) reject false positives. We demonstrate that regardless of device placement, we are able to detect the number of steps taken with >99.4% accuracy. R-PDR thus addresses the two main limitations facing existing PDR techniques.",
"title": ""
},
{
"docid": "6a2e3c783b468474ca0f67d7c5af456c",
"text": "We evaluated the cytotoxic effects of four prostaglandin analogs (PGAs) used to treat glaucoma. First we established primary cultures of conjunctival stromal cells from healthy donors. Then cell cultures were incubated with different concentrations (0, 0.1, 1, 5, 25, 50 and 100%) of commercial formulations of bimatoprost, tafluprost, travoprost and latanoprost for increasing periods (5 and 30 min, 1 h, 6 h and 24 h) and cell survival was assessed with three different methods: WST-1, MTT and calcein/AM-ethidium homodimer-1 assays. Our results showed that all PGAs were associated with a certain level of cell damage, which correlated significantly with the concentration of PGA used, and to a lesser extent with culture time. Tafluprost tended to be less toxic than bimatoprost, travoprost and latanoprost after all culture periods. The results for WST-1, MTT and calcein/AM-ethidium homodimer-1 correlated closely. When the average lethal dose 50 was calculated, we found that the most cytotoxic drug was latanoprost, whereas tafluprost was the most sparing of the ocular surface in vitro. These results indicate the need to design novel PGAs with high effectiveness but free from the cytotoxic effects that we found, or at least to obtain drugs that are functional at low dosages. The fact that the commercial formulation of tafluprost used in this work was preservative-free may support the current tendency to eliminate preservatives from eye drops for clinical use.",
"title": ""
},
{
"docid": "9f3f5e2baa1bff4aa28a2ce2a4c47088",
"text": "One of the most perplexing problems in risk analysis is why some relatively minor risks or risk events, as assessed by technical experts, often elicit strong public concerns and result in substantial impacts upon society and economy. This article sets forth a conceptual framework that seeks to link systematically the technical assessment of risk with psychological, sociological, and cultural perspectives of risk perception and risk-related behavior. The main thesis is that hazards interact with psychological, social, institutional, and cultural processes in ways that may amplify or attenuate public responses to the risk or risk event. A structural description of the social amplification of risk is now possible. Amplification occurs at two stages: in the transfer of information about the risk, and in the response mechanisms of society. Signals about risk are processed by individual and social amplification stations, including the scientist who communicates the risk assessment, the news media, cultural groups, interpersonal networks, and others. Key steps of amplifications can be identified at each stage. The amplified risk leads to behavioral responses, which, in turn, result in secondary impacts. Models are presented that portray the elements and linkages in the proposed conceptual framework.",
"title": ""
},
{
"docid": "d07ba52b14c098ca5e2178ce64fc4403",
"text": "Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights why multilayer feedforward neural networks perform well in practice. Interestingly, the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression scaling the network depth with the logarithm of the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates.",
"title": ""
}
] |
scidocsrr
|
977e5731a5015629f26c85791195f0dc
|
Visual localization and loop closing using decision trees and binary features
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "368a3dd36283257c5573a7e1ab94e930",
"text": "This paper develops the multidimensional binary search tree (or <italic>k</italic>-d tree, where <italic>k</italic> is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The <italic>k</italic>-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an <italic>n</italic> record file are: insertion, <italic>O</italic>(log <italic>n</italic>); deletion of the root, <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-1)/<italic>k</italic></supscrpt>); deletion of a random node, <italic>O</italic>(log <italic>n</italic>); and optimization (guarantees logarithmic performance of searches), <italic>O</italic>(<italic>n</italic> log <italic>n</italic>). Search algorithms are given for partial match queries with <italic>t</italic> keys specified [proven maximum running time of <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-<italic>t</italic>)/<italic>k</italic></supscrpt>)] and for nearest neighbor queries [empirically observed average running time of <italic>O</italic>(log <italic>n</italic>).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that <italic>k</italic>-d trees could be quite useful in many applications, and examples of potential uses are given.",
"title": ""
}
] |
[
{
"docid": "65fa13e16b7411c5b3ed20f6009809df",
"text": "In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks (GANs). GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer. In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models on textual data. Attempts have been made for utilizing GANs with word embeddings for text generation. This work presents an approach to text generation using SkipThought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures. The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used.",
"title": ""
},
{
"docid": "4523358a96dbf48fd86a1098ffef5c7e",
"text": "This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to nondifferentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters of the proposal mechanism automatically to ensure efficient mixing of the Markov chains.",
"title": ""
},
{
"docid": "15f51cbbb75d236a5669f613855312e0",
"text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.",
"title": ""
},
{
"docid": "27dda1e123c1b2844b9a570c0f01757b",
"text": "Yue-Tian-Yi Zhao a, Zi-Yang Jia b, Yong Tang c,d,*, Jason Jie Xiong e, Yi-Cheng Zhang d a School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, 610054, China b Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA c School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China d Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700 Fribourg, Switzerland e Department of Computer Information Systems and Supply Chain Management, Walker College of Business, Appalachian State University, Boone, NC 28608, USA",
"title": ""
},
{
"docid": "9b52a659fb6383e92c5968a082b01b71",
"text": "The internet of things (IoT) has a variety of application domains, including smart homes. This paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attacks from the smart home perspective. Further, this paper proposes an intelligent collaborative security management model to minimize security risk. The security challenges of the IoT for a smart home scenario are encountered, and a comprehensive IoT security management for smart homes has been proposed.",
"title": ""
},
{
"docid": "c2b3329a849a5554ab6636bf42218519",
"text": "Autism spectrum disorders are not rare; many primary care pediatricians care for several children with autism spectrum disorders. Pediatricians play an important role in early recognition of autism spectrum disorders, because they usually are the first point of contact for parents. Parents are now much more aware of the early signs of autism spectrum disorders because of frequent coverage in the media; if their child demonstrates any of the published signs, they will most likely raise their concerns to their child's pediatrician. It is important that pediatricians be able to recognize the signs and symptoms of autism spectrum disorders and have a strategy for assessing them systematically. Pediatricians also must be aware of local resources that can assist in making a definitive diagnosis of, and in managing, autism spectrum disorders. The pediatrician must be familiar with developmental, educational, and community resources as well as medical subspecialty clinics. This clinical report is 1 of 2 documents that replace the original American Academy of Pediatrics policy statement and technical report published in 2001. This report addresses background information, including definition, history, epidemiology, diagnostic criteria, early signs, neuropathologic aspects, and etiologic possibilities in autism spectrum disorders. In addition, this report provides an algorithm to help the pediatrician develop a strategy for early identification of children with autism spectrum disorders. The accompanying clinical report addresses the management of children with autism spectrum disorders and follows this report on page 1162 [available at www.pediatrics.org/cgi/content/full/120/5/1162]. Both clinical reports are complemented by the toolkit titled \"Autism: Caring for Children With Autism Spectrum Disorders: A Resource Toolkit for Clinicians,\" which contains screening and surveillance tools, practical forms, tables, and parent handouts to assist the pediatrician in the identification, evaluation, and management of autism spectrum disorders in children.",
"title": ""
},
{
"docid": "837dd154df4971adaa4d1f397f546c20",
"text": "Public infrastructure systems provide many of the services that are critical to the health, functioning, and security of society. Many of these infrastructures, however, lack continuous physical sensor monitoring to be able to detect failure events or damage that has occurred to these systems. We propose the use of social sensor big data to detect these events. We focus on two main infrastructure systems, transportation and energy, and use data from Twitter streams to detect damage to bridges, highways, gas lines, and power infrastructure. Through a three-step filtering approach and assignment to geographical cells, we are able to filter out noise in this data to produce relevant geolocated tweets identifying failure events. Applying the strategy to real-world data, we demonstrate the ability of our approach to utilize social sensor big data to detect damage and failure events in these critical public infrastructures.",
"title": ""
},
{
"docid": "8ec9a57e096e05ad57e3421b67dc1b27",
"text": "I review the literature on equity market momentum, a seminal and intriguing finding in finance. This phenomenon is the ability of returns over the past one to four quarters to predict future returns over the same period in the cross-section of equities. I am able to document about ten different theories for momentum, and a large volume of empirical work on the topic. I find, however, that after a quarter century following the discovery of momentum by Jegadeesh and Titman (1993), we are still no closer to finding a discernible cause for this phenomenon, in spite of the extensive work on the topic. More needs to be done to develop tests that are focused not so much on testing one specific theory, but on ruling out alternative",
"title": ""
},
{
"docid": "12579b211831d9df508ecd1f90469399",
"text": "This article considers stochastic algorithms for efficiently solving a class of large scale non-linear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps are quantified. Such stochastic steps involve approximating the NLS objective function using Monte-Carlo methods, and this is equivalent to the estimation of the trace of corresponding symmetric positive semi-definite (SPSD) matrices. For the latter, we prove tight necessary and sufficient conditions on the sample size (which translates to cost) to satisfy the prescribed probabilistic accuracy. We show that these conditions are practically computable and yield small sample sizes. They are then incorporated in our stochastic algorithm to quantify the uncertainty in each randomized step. The bounds we use are applications of more general results regarding extremal tail probabilities of linear combinations of gamma distributed random variables. We derive and prove new results concerning the maximal and minimal tail probabilities of such linear combinations, which can be considered independently of the rest of this paper.",
"title": ""
},
{
"docid": "b81ed45ad3a3fae8d85993f8cf462640",
"text": "Structure learning is a very important problem in the field of Bayesian networks (BNs). It is also an active research area for more than two decades; therefore, many approaches have been proposed in order to find an optimal structure based on training samples. In this paper, a Particle Swarm Optimization (PSO)-based algorithm is proposed to solve the BN structure learning problem; named BNC-PSO (Bayesian Network Construction algorithm using PSO). Edge inserting/deleting is employed in the algorithm to make the particles have the ability to achieve the optimal solution, while a cycle removing procedure is used to prevent the generation of invalid solutions. Then, the theorem of Markov chain is used to prove the global convergence of our proposed algorithm. Finally, some experiments are designed to evaluate the performance of the proposed PSO-based algorithm. Experimental results indicate that BNC-PSO is worthy of being studied in the field of BNs construction. Meanwhile, it can significantly increase nearly 15% in the scoring metric values, comparing with other optimization-based algorithms. BNC‐PSO: Structure Learning of Bayesian Networks by Particle Swarm Optimization S. Gheisari M.R. Meybodi Department of Computer, Science and Research Branch, Islamic Azad University, Tehran, Iran. Computer Engineering and Information Technology Department, Amirkabir University of Technology, Tehran, Iran. [email protected] [email protected] Abstract Structure learning is a very important problem in the field of Bayesian networks (BNs). It is also an active research area for more than two decades; therefore, many approaches have been proposed in order to find an optimal structure based on training samples. In this paper, a Particle Swarm Optimization (PSO)-based algorithm is proposed to solve the BN structure learning problem; named BNC-PSO (Bayesian Network Construction algorithm using PSO). Edge inserting/deleting is employed in the algorithm to make the particles have the ability to achieve the optimal solution, while a cycle removing procedure is used to prevent the generation of invalid solutions. Then, the theorem of Markov chain is used to prove the global convergence of our proposed algorithm. Finally, some experiments are designed to evaluate the performance of the proposed PSO-based algorithm. Experimental results indicate that BNC-PSO is worthy of being studied in the field of BNs construction. Meanwhile, it can significantly increase nearly 15% in the scoring metric values, comparing with other optimization-based algorithms.",
"title": ""
},
{
"docid": "e18a8e3622ae85763c729bd2844ce14c",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.028 ⇑ Corresponding author. E-mail address: [email protected] (D. Gil). 1 These authors equally contributed to this work. Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology as decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in the prediction of the seminal quality from the data of the environmental factors and lifestyle. To do that we collect data by a normalized questionnaire from young healthy volunteers and then, we use the results of a semen analysis to asses the accuracy in the prediction of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines show the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast decision trees provide a visual and illustrative approach that can compensate the slightly lower accuracy obtained. In conclusion artificial intelligence methods are a useful tool in order to predict the seminal profile of an individual from the environmental factors and life habits. From the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6e07085f81dc4f6892e0f2aba7a8dcdd",
"text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.",
"title": ""
},
{
"docid": "6c68bccf376da1f963aaa8ec5e08b646",
"text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.",
"title": ""
},
{
"docid": "1987ba476be524db448cce1835460a33",
"text": "We report on the main features of the IJCAI’07 program, including its theme, and its schedule and organization. In particular, we discuss an effective and novel presentation format at IJCAI in which oral and poster papers were presented in the same sessions categorized by topic area.",
"title": ""
},
{
"docid": "48fde3a2cd8781ce675ce116ed8ee861",
"text": "DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user,dependant on path conditions. Backward-compatible modes are also available,allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.",
"title": ""
},
{
"docid": "583e56fcef68f697d19b179766341aba",
"text": "We recorded echolocation calls from 14 sympatric species of bat in Britain. Once digitised, one temporal and four spectral features were measured from each call. The frequency-time course of each call was approximated by fitting eight mathematical functions, and the goodness of fit, represented by the mean-squared error, was calculated. Measurements were taken using an automated process that extracted a single call from background noise and measured all variables without intervention. Two species of Rhinolophus were easily identified from call duration and spectral measurements. For the remaining 12 species, discriminant function analysis and multilayer back-propagation perceptrons were used to classify calls to species level. Analyses were carried out with and without the inclusion of curve-fitting data to evaluate its usefulness in distinguishing among species. Discriminant function analysis achieved an overall correct classification rate of 79% with curve-fitting data included, while an artificial neural network achieved 87%. The removal of curve-fitting data improved the performance of the discriminant function analysis by 2 %, while the performance of a perceptron decreased by 2 %. However, an increase in correct identification rates when curve-fitting information was included was not found for all species. The use of a hierarchical classification system, whereby calls were first classified to genus level and then to species level, had little effect on correct classification rates by discriminant function analysis but did improve rates achieved by perceptrons. This is the first published study to use artificial neural networks to classify the echolocation calls of bats to species level. Our findings are discussed in terms of recent advances in recording and analysis technologies, and are related to factors causing convergence and divergence of echolocation call design in bats.",
"title": ""
},
{
"docid": "43f2dcf2f2260ff140e20380d265105b",
"text": "As ontologies are the backbone of the Semantic Web, they attract much attention from researchers and engineers in many domains. This results in an increasing number of ontologies and semantic web applications. The number and complexity of such ontologies makes it hard for developers of ontologies and tools to decide which ontologies to use and reuse. To simplify the problem, a modularization algorithm can be used to partition ontologies into sets of modules. In order to evaluate the quality of modularization, we propose a new evaluation metric that quantifies the goodness of ontology modularization. In particular, we investigate the ontology module homogeneity, which assesses module cohesion, and the ontology module heterogeneity, which appraises module coupling. The experimental results demonstrate that the proposed metric is effective.",
"title": ""
},
{
"docid": "62cc85ab7517797f50ce5026fbc5617a",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
},
{
"docid": "c23cb6c1cebcc1f5fcd925dc3b75ab6b",
"text": "This paper presents the design of a controller for an autonomous ground vehicle. The goal is to track the lane centerline while avoiding collisions with obstacles. A nonlinear model predictive control (MPC) framework is used where the control inputs are the front steering angle and the braking torques at the four wheels. The focus of this work is on the development of a tailored algorithm for solving the nonlinear MPC problem. Hardware-in-the-loop simulations with the proposed algorithm show a reduction in the computational time as compared to general purpose nonlinear solvers. Experimental tests on a passenger vehicle at high speeds on low friction road surfaces show the effectiveness of the proposed algorithm.",
"title": ""
}
] |
scidocsrr
|
e745cdf3341de90bb9b19a4739da8659
|
Game design principles in everyday fitness applications
|
[
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "1aeca45f1934d963455698879b1e53e8",
"text": "A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish'n'Steps, which links a player’s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players’ fish tanks included other players’ fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players’ attitudes towards physical activity. Furthermore, although most player’s enthusiasm in the game decreased after the game’s first two weeks, analyzing the results using Prochaska's Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",
"title": ""
}
] |
[
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "928eb797289d2630ff2e701ced782a14",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "70ea3e32d4928e7fd174b417ec8b6d0e",
"text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "b1453c089b5b9075a1b54e4f564f7b45",
"text": "Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal as shown by the recent Tesla autopilot crashes. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks and the ones that can scale to larger networks suffer from high false positives and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "c256283819014d79dd496a3183116b68",
"text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "7963adab39b58ab0334b8eef4149c59c",
"text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.",
"title": ""
},
{
"docid": "179d8f41102862710595671e5a819d70",
"text": "Detecting changes in time series data is an important data analysis task with application in various scientific domains. In this paper, we propose a novel approach to address the problem of change detection in time series data, which can find both the amplitude and degree of changes. Our approach is based on wavelet footprints proposed originally by the signal processing community for signal compression. We, however, exploit the properties of footprints to efficiently capture discontinuities in a signal. We show that transforming time series data using footprint basis up to degree D generates nonzero coefficients only at the change points with degree up to D. Exploiting this property, we propose a novel change detection query processing scheme which employs footprint-transformed data to identify change points, their amplitudes, and degrees of change efficiently and accurately. We also present two methods for exact and approximate transformation of data. Our analytical and empirical results with both synthetic and real-world data show that our approach outperforms the best known change detection approach in terms of both performance and accuracy. Furthermore, unlike the state of the art approaches, our query response time is independent from the number of change points in the data and the user-defined change threshold.",
"title": ""
},
{
"docid": "c59aaad99023e5c6898243db208a4c3c",
"text": "This paper presents a method for automated vessel segmentation in retinal images. For each pixel in the field of view of the image, a 41-D feature vector is constructed, encoding information on the local intensity structure, spatial properties, and geometry at multiple scales. An AdaBoost classifier is trained on 789 914 gold standard examples of vessel and nonvessel pixels, then used for classifying previously unseen images. The algorithm was tested on the public digital retinal images for vessel extraction (DRIVE) set, frequently used in the literature and consisting of 40 manually labeled images with gold standard. Results were compared experimentally with those of eight algorithms as well as the additional manual segmentation provided by DRIVE. Training was conducted confined to the dedicated training set from the DRIVE database, and feature-based AdaBoost classifier (FABC) was tested on the 20 images from the test set. FABC achieved an area under the receiver operating characteristic (ROC) curve of 0.9561, in line with state-of-the-art approaches, but outperforming their accuracy (0.9597 versus 0.9473 for the nearest performer).",
"title": ""
},
{
"docid": "e11b4a08fc864112d4f68db1ea9703e9",
"text": "Forecasting is an integral part of any organization for their decision-making process so that they can predict their targets and modify their strategy in order to improve their sales or productivity in the coming future. This paper evaluates and compares various machine learning models, namely, ARIMA, Auto Regressive Neural Network(ARNN), XGBoost, SVM, Hy-brid Models like Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM and STL Decomposition (using ARIMA, Snaive, XGBoost) to forecast sales of a drug store company called Rossmann. Training data set contains past sales and supplemental information about drug stores. Accuracy of these models is measured by metrics such as MAE and RMSE. Initially, linear model such as ARIMA has been applied to forecast sales. ARIMA was not able to capture nonlinear patterns precisely, hence nonlinear models such as Neural Network, XGBoost and SVM were used. Nonlinear models performed better than ARIMA and gave low RMSE. Then, to further optimize the performance, composite models were designed using hybrid technique and decomposition technique. Hybrid ARIMA-ARNN, Hybrid ARIMA-XGBoost, Hybrid ARIMA-SVM were used and all of them performed better than their respective individual models. Then, the composite model was designed using STL Decomposition where the decomposed components namely seasonal, trend and remainder components were forecasted by Snaive, ARIMA and XGBoost. STL gave better results than individual and hybrid models. This paper evaluates and analyzes why composite models give better results than an individual model and state that decomposition technique is better than the hybrid technique for this application.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8c2e69380cebdd6affd43c6bfed2fc51",
"text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.",
"title": ""
},
{
"docid": "a1046f5282cf4057fd143fdce79c6990",
"text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study tries to examine the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4 + and CD8 + T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL1-β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using enzyme linked immunosorbant assay. We found some variations in the counts of T cell subsets, the levels of cytokines and sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis. In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.",
"title": ""
},
{
"docid": "15e034d722778575b43394b968be19ad",
"text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "11a1c92620d58100194b735bfc18c695",
"text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.",
"title": ""
},
{
"docid": "02469f669769f5c9e2a9dc49cee20862",
"text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.",
"title": ""
},
{
"docid": "24e1a6f966594d4230089fc433e38ce6",
"text": "The need for omnidirectional antennas for wireless applications has increased considerably. The antennas are used in a variety of bands anywhere from 1.7 to 2.5 GHz, in different configurations which mainly differ in gain. The omnidirectionality is mostly obtained using back-to-back elements or simply using dipoles in different collinear-array configurations. The antenna proposed in this paper is a patch which was built in a cylindrical geometry rather than a planar one, and which generates an omnidirectional pattern in the H-plane.",
"title": ""
}
] |
scidocsrr
|
30c84ddbfcd91cf01f3da6474043f8e0
|
The Chaos Within Sudoku
|
[
{
"docid": "7b170913f315cf5f240958ffbde6697e",
"text": "We show that single-digit “Nishio” subproblems in n×n Sudoku puzzles may be solved in time o(2n), faster than previous solutions such as the pattern overlay method. We also show that single-digit deduction in Sudoku is NP-hard.",
"title": ""
},
{
"docid": "0abc7402f2e9a51be82c4ceea9f9ec02",
"text": "It's one of the fundamental mathematical problems of our time, and its importance grows with the rise of powerful computers.",
"title": ""
}
] |
[
{
"docid": "21af4ea62f07966097c8ab46f7226907",
"text": "With the introduction of Microsoft Kinect, there has been considerable interest in creating various attractive and feasible applications in related research fields. Kinect simultaneously captures the depth and color information and provides real-time reliable 3D full-body human-pose reconstruction that essentially turns the human body into a controller. This article presents a finger-writing system that recognizes characters written in the air without the need for an extra handheld device. This application adaptively merges depth, skin, and background models for the hand segmentation to overcome the limitations of the individual models, such as hand-face overlapping problems and the depth-color nonsynchronization. The writing fingertip is detected by a new real-time dual-mode switching method. The recognition accuracy rate is greater than 90 percent for the first five candidates of Chinese characters, English characters, and numbers.",
"title": ""
},
{
"docid": "81f504c4e378d0952231565d3ba4c555",
"text": "The alignment problem—establishing links between corresponding phrases in two related sentences—is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.",
"title": ""
},
{
"docid": "ad8c9bc6a3b661eaea101653b4119123",
"text": "In three experiments, we studied the influence of foreign language knowledge on native language performance in an exclusively native language context. Trilinguals with Dutch as their native and dominant language (L1), English as their second language (L2), and French as their third language (L3) performed a word association task (Experiment 1) or a lexical decision task (Experiments 2 and 3) in L1. The L1 stimulus words were cognates with their translations in English, cognates with their translations in French, or were noncognates. In Experiments 1 and 2 with trilinguals who were highly proficient in English and relatively low in proficiency in French, we observed shorter word association and lexical decision times to the L1 words that were cognates with English than to the noncognates. In these relatively low-proficiency French speakers, response times (RTs) for the L1 words that were cognates with French did not differ from those for the noncognates. In Experiment 3, we tested Dutch-English-French trilinguals with a higher level of fluency in French (i.e., equally fluent in English and in French). We now observed faster responses on the L1 words that were cognates with French than on the noncognates. Lexical decision times to the cognates with English were also shorter than those to then oncognates. The results indicate that words presented in the dominant language, to naive participants, activate information in the nontarget, and weaker, language in parallel, implying that the multilinguals' processing system is profoundly nonselective with respect to language. A minimal level of nontarget language fluency seems to be required, however, before any weaker language effects become noticeable in L1 processing.",
"title": ""
},
{
"docid": "2ac52b10bc1ea9e69bb20b05f449d398",
"text": "The application of game elements a in non-gaming context offers a great potential regarding the engagement of senior citizens with information systems. In this paper, we suggest the application of gamification to routine tasks and leisure activities, namely physical and cognitive therapy, the gamification of real-life activities which are no longer accessible due to age-related changes and the application of game design elements to foster social interaction. Furthermore, we point out important chances and challenges such as the lack of gaming experience among the target audience and highlight possible areas for future work which offer valuable design opportunities for frail elderly audiences.",
"title": ""
},
{
"docid": "4c1da8d356e4f793d76f79d4270ecbd0",
"text": "As the proportion of the ageing population in industrialized countries continues to increase, the dermatological concerns of the aged grow in medical importance. Intrinsic structural changes occur as a natural consequence of ageing and are genetically determined. The rate of ageing is significantly different among different populations, as well as among different anatomical sites even within a single individual. The intrinsic rate of skin ageing in any individual can also be dramatically influenced by personal and environmental factors, particularly the amount of exposure to ultraviolet light. Photodamage, which considerably accelerates the visible ageing of skin, also greatly increases the risk of cutaneous neoplasms. As the population ages, dermatological focus must shift from ameliorating the cosmetic consequences of skin ageing to decreasing the genuine morbidity associated with problems of the ageing skin. A better understanding of both the intrinsic and extrinsic influences on the ageing of the skin, as well as distinguishing the retractable aspects of cutaneous ageing (primarily hormonal and lifestyle influences) from the irretractable (primarily intrinsic ageing), is crucial to this endeavour.",
"title": ""
},
{
"docid": "5357d90787090ec822d0b540d09b6c6b",
"text": "Providing accurate attendance marking system in real-time is challenging. It is tough to mark the attendance of a student in the large classroom when there are many students attending the class. Many attendance management systems have been implemented in the recent research. However, the attendance management system based on facial recognition still has issues. Thus many research have been conducted to improve system. This paper reviewed the previous works on attendance management system based on facial recognition. This article does not only provide the literature review on the earlier work or related work, but it also provides the deep analysis of Principal Component Analysis, discussion, suggestions for future work.",
"title": ""
},
{
"docid": "28f9a2b2f6f4e90de20c6af78727b131",
"text": "The detection and potential removal of duplicates is desirable for a number of reasons, such as to reduce the need for unnecessary storage and computation, and to provide users with uncluttered search results. This paper describes an investigation into the application of scalable simhash and shingle state of the art duplicate detection algorithms for detecting near duplicate documents in the CiteSeerX digital library. We empirically explored the duplicate detection methods and evaluated their performance and application to academic documents and identified good parameters for the algorithms. We also analyzed the types of near duplicates identified by each algorithm. The highest F-scores achieved were 0.91 and 0.99 for the simhash and shingle-based methods respectively. The shingle-based method also identified a larger variety of duplicate types than the simhash-based method.",
"title": ""
},
{
"docid": "af05ec4998302687aae09cc1d5ad4ccd",
"text": "The development of wireless portable electronics is moving towards smaller and lighter devices. Although low noise amplifier (LNA) performance is extremely good nowadays, the design engineer still has to make some complex system trades. Many LNA are large, heavy and consume a lot of power. The design of an LNA in radio frequency (RF) circuits requires the trade-off of many important characteristics, such as gain, noise figure (NF), stability, power consumption and complexity. This situation forces designers to make choices in the design of RF circuits. The designed simulation process is done using the Advance Design System (ADS), while FR4 strip board is used for fabrication purposes. A single stage LNA has successfully designed with 7.78 dB forward gain and 1.53 dB noise figure; it is stable along the UNII frequency band.",
"title": ""
},
{
"docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8",
"text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. 273 At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …",
"title": ""
},
{
"docid": "74eb6322d674dec026dc366fbde490bf",
"text": "The purpose of this investigation was to assess the effects of stance width and foot rotation angle on three-dimensional knee joint moments during bodyweight squat performance. Twenty-eight participants performed 8 repetitions in 4 conditions differing in stance or foot rotation positions. Knee joint moment waveforms were subjected to principal component analysis. Results indicated that increasing stance width resulted in a larger knee flexion moment magnitude, as well as larger and phase-shifted adduction moment waveforms. The knee's internal rotation moment magnitude was significantly reduced with external foot rotation only under the wide stance condition. Moreover, squat performance with a wide stance and externally rotated feet resulted in a flattening of the internal rotation moment waveform during the middle portion of the movement. However, it is speculated that the differences observed across conditions are not of clinical relevance for young, healthy participants.",
"title": ""
},
{
"docid": "ddae88fd5b053c338be337fd4a228f80",
"text": "The semiology of graphics diagrams networks maps that we provide for you will be ultimate to give preference. This reading book is your chosen book to accompany you when in your free time, in your lonely. This kind of book can help you to heal the lonely and get or add the inspirations to be more inoperative. Yeah, book as the widow of the world can be very inspiring manners. As here, this book is also created by an inspiring author that can make influences of you to do more.",
"title": ""
},
{
"docid": "32378690ded8920eb81689fea1ac8c23",
"text": "OBJECTIVE\nTo investigate the effect of Beri-honey-impregnated dressing on diabetic foot ulcer and compare it with normal saline dressing.\n\n\nSTUDY DESIGN\nA randomized, controlled trial.\n\n\nPLACE AND DURATION OF STUDY\nSughra Shafi Medical Complex, Narowal, Pakistan and Bhatti International Trust (BIT) Hospital, Affiliated with Central Park Medical College, Lahore, from February 2006 to February 2010.\n\n\nMETHODOLOGY\nPatients with Wagner's grade 1 and 2 ulcers were enrolled. Those patients were divided in two groups; group A (n=179) treated with honey dressing and group B (n=169) treated with normal saline dressing. Outcome measures were calculated in terms of proportion of wounds completely healed (primary outcome), wound healing time, and deterioration of wounds. Patients were followed-up for a maximum of 120 days.\n\n\nRESULTS\nOne hundred and thirty six wounds (75.97%) out of 179 were completely healed with honey dressing and 97 (57.39%) out of 169 wtih saline dressing (p=0.001). The median wound healing time was 18.00 (6 - 120) days (Median with IQR) in group A and 29.00 (7 - 120) days (Median with IQR) in group B (p < 0.001).\n\n\nCONCLUSION\nThe present results showed that honey is an effective dressing agent instead of conventional dressings, in treating patients of diabetic foot ulcer.",
"title": ""
},
{
"docid": "39e7f2015b1f2df4017a4dd0fa4e0012",
"text": "The large variety of architectural dimensions in automotive electronics design, for example, bus protocols, number of nodes, sensors and actuators interconnections and power distribution topologies, makes architecture design task a very complex but crucial design step especially for OEMs. This situation motivates the need for a design environment that accommodates the integration of a variety of models in a manner that enables the exploration of design alternatives in an efficient and seamless fashion. Exploring these design alternatives in a virtual environment and evaluating them with respect to metrics such as cost, latency, flexibility and reliability provide an important competitive advantage to OEMs and help minimize integration risks later in the design cycle. In particular, the choice of the degree of decentralization of the architecture has become a crucial issue in automotive electronics. In this paper, we demonstrate how a rigorous methodology (platform-based design) and the Metropolis framework can be used to find the balance between centralized and decentralized architectures",
"title": ""
},
{
"docid": "81765da7a2d708e8f607255e465259de",
"text": "Feature-based product modeling is the leading approach for the integrated representation of engineering product data. On the one side, this approach has stimulated the development of formal models and vocabularies, data standards and computational ontologies. On the other side, the current ways to model features is considered problematic since it lacks a principled and uniform methodology for feature representation. This paper reviews the state of art of feature-based modeling approaches by concentrating on how features are conceptualised. It points out the drawbacks of current approaches and proposes an high-level ontology-based perspective to harmonize the definition of feature.",
"title": ""
},
{
"docid": "6936462dee2424b92c7476faed5b5a23",
"text": "A significant challenge in scene text detection is the large variation in text sizes. In particular, small text are usually hard to detect. This paper presents an accurate oriented text detector based on Faster R-CNN. We observe that Faster R-CNN is suitable for general object detection but inadequate for scene text detection due to the large variation in text size. We apply feature fusion both in RPN and Fast R-CNN to alleviate this problem and furthermore, enhance model's ability to detect relatively small text. Our text detector achieves comparable results to those state of the art methods on ICDAR 2015 and MSRA-TD500, showing its advantage and applicability.",
"title": ""
},
{
"docid": "933e51f6d297ecb1393688f4165079e1",
"text": "Image clustering is one of the challenging tasks in machine learning, and has been extensively used in various applications. Recently, various deep clustering methods has been proposed. These methods take a two-stage approach, feature learning and clustering, sequentially or jointly. We observe that these works usually focus on the combination of reconstruction loss and clustering loss, relatively little work has focused on improving the learning representation of the neural network for clustering. In this paper, we propose a deep convolutional embedded clustering algorithm with inception-like block (DCECI). Specifically, an inception-like block with different type of convolution filters are introduced in the symmetric deep convolutional network to preserve the local structure of convolution layers. We simultaneously minimize the reconstruction loss of the convolutional autoencoders with inception-like block and the clustering loss. Experimental results on multiple image datasets exhibit the promising performance of our proposed algorithm compared with other competitive methods.",
"title": ""
},
{
"docid": "400be1fdbd0f1aebfb0da220fd62e522",
"text": "Understanding users' interactions with highly subjective content---like artistic images---is challenging due to the complex semantics that guide our preferences. On the one hand one has to overcome `standard' recommender systems challenges, such as dealing with large, sparse, and long-tailed datasets. On the other, several new challenges present themselves, such as the need to model content in terms of its visual appearance, or even social dynamics, such as a preference toward a particular artist that is independent of the art they create. In this paper we build large-scale recommender systems to model the dynamics of a vibrant digital art community, Behance, consisting of tens of millions of interactions (clicks and 'appreciates') of users toward digital art. Methodologically, our main contributions are to model (a) rich content, especially in terms of its visual appearance; (b) temporal dynamics, in terms of how users prefer 'visually consistent' content within and across sessions; and (c) social dynamics, in terms of how users exhibit preferences both towards certain art styles, as well as the artists themselves.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "fc4fe91aab968227cf718e7a83393d4e",
"text": "People may look dramatically different by changing their hair color, hair style, when they grow older, in a different era style, or a different country or occupation. Some of those may transfigure appearance and inspire creative changes, some not, but how would we know without physically trying? We present a system that enables automatic synthesis of limitless numbers of appearances. A user inputs one or more photos (as many as they like) of his or her face, text queries an appearance of interest (just like they'd search an image search engine) and gets as output the input person in the queried appearance. Rather than fixing the number of queries or a dataset our system utilizes all the relevant and searchable images on the Internet, estimates a doppelgänger set for the inputs, and utilizes it to generate composites. We present a large number of examples on photos taken with completely unconstrained imaging conditions.",
"title": ""
},
{
"docid": "f0500185d2d3b1daa8ea436cd37f19a6",
"text": "Previous studies have shown that low-intensity resistance training with restricted muscular venous blood flow (Kaatsu) causes muscle hypertrophy and strength gain. To investigate the effects of daily physical activity combined with Kaatsu, we examined the acute and chronic effects of walk training with and without Kaatsu on MRI-measured muscle size and maximum dynamic (one repetition maximum) and isometric strength, along with blood hormonal parameters. Nine men performed Kaatsu-walk training, and nine men performed walk training alone (control-walk). Training was conducted two times a day, 6 days/wk, for 3 wk using five sets of 2-min bouts (treadmill speed at 50 m/min), with a 1-min rest between bouts. Mean oxygen uptake during Kaatsu-walk and control-walk exercise was 19.5 (SD 3.6) and 17.2 % (SD 3.1) of treadmill-determined maximum oxygen uptake, respectively. Serum growth hormone was elevated (P < 0.01) after acute Kaatsu-walk exercise but not in control-walk exercise. MRI-measured thigh muscle cross-sectional area and muscle volume increased by 4-7%, and one repetition maximum and maximum isometric strength increased by 8-10% in the Kaatsu-walk group. There was no change in muscle size and dynamic and isometric strength in the control-walk group. Indicators of muscle damage (creatine kinase and myoglobin) and resting anabolic hormones did not change in both groups. The results suggest that the combination of leg muscle blood flow restriction with slow-walk training induces muscle hypertrophy and strength gain, despite the minimal level of exercise intensity. Kaatsu-walk training may be a potentially useful method for promoting muscle hypertrophy, covering a wide range of the population, including the frail and elderly.",
"title": ""
}
] |
scidocsrr
|
95c59e3a429233fd83bde1c55fa2c103
|
Cognitive , metacognitive , and motivational aspects of problem solving
|
[
{
"docid": "c56c71775a0c87f7bb6c59d6607e5280",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
},
{
"docid": "f71d0084ebb315a346b52c7630f36fb2",
"text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.",
"title": ""
}
] |
[
{
"docid": "6c1a1e47ce91b2d9ae60a0cfc972b7e4",
"text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.",
"title": ""
},
{
"docid": "b7715fb5c6fb19363cb1bdaf92981643",
"text": "The composition and antifungal activity of clove essential oil (EO), obtained from Syzygium aromaticum, were studied. Clove oil was obtained commercially and analysed by GC and GC-MS. The EO analysed showed a high content of eugenol (85.3 %). MICs, determined according to Clinical and Laboratory Standards Institute protocols, and minimum fungicidal concentration were used to evaluate the antifungal activity of the clove oil and its main component, eugenol, against Candida, Aspergillus and dermatophyte clinical and American Type Culture Collection strains. The EO and eugenol showed inhibitory activity against all the tested strains. To clarify its mechanism of action on yeasts and filamentous fungi, flow cytometric and inhibition of ergosterol synthesis studies were performed. Propidium iodide rapidly penetrated the majority of the yeast cells when the cells were treated with concentrations just over the MICs, meaning that the fungicidal effect resulted from an extensive lesion of the cell membrane. Clove oil and eugenol also caused a considerable reduction in the quantity of ergosterol, a specific fungal cell membrane component. Germ tube formation by Candida albicans was completely or almost completely inhibited by oil and eugenol concentrations below the MIC values. The present study indicates that clove oil and eugenol have considerable antifungal activity against clinically relevant fungi, including fluconazole-resistant strains, deserving further investigation for clinical application in the treatment of fungal infections.",
"title": ""
},
{
"docid": "e06b2385b1b9a81b9678fa5be485151a",
"text": "We propose a new weight compensation mechanism with a non-circular pulley and a spring. We show the basic principle and numerical design method to derive the shape of the non-circular pulley. After demonstration of the weight compensation for an inverted/ordinary pendulum system, we extend the same mechanism to a parallel five-bar linkage system, analyzing the required torques using transposed Jacobian matrices. Finally, we develop a three degree of freedom manipulator with relatively small output actuators and verified that the weight compensation mechanism significantly contributes to decrease static torques to keep the same posture within manipulator's work space.",
"title": ""
},
{
"docid": "356361bf2ca0e821250e4a32d299d498",
"text": "DRAM has been a de facto standard for main memory, and advances in process technology have led to a rapid increase in its capacity and bandwidth. In contrast, its random access latency has remained relatively stagnant, as it is still around 100 CPU clock cycles. Modern computer systems rely on caches or other latency tolerance techniques to lower the average access latency. However, not all applications have ample parallelism or locality that would help hide or reduce the latency. Moreover, applications' demands for memory space continue to grow, while the capacity gap between last-level caches and main memory is unlikely to shrink. Consequently, reducing the main-memory latency is important for application performance. Unfortunately, previous proposals have not adequately addressed this problem, as they have focused only on improving the bandwidth and capacity or reduced the latency at the cost of significant area overhead.\n We propose asymmetric DRAM bank organizations to reduce the average main-memory access latency. We first analyze the access and cycle times of a modern DRAM device to identify key delay components for latency reduction. Then we reorganize a subset of DRAM banks to reduce their access and cycle times by half with low area overhead. By synergistically combining these reorganized DRAM banks with support for non-uniform bank accesses, we introduce a novel DRAM bank organization with center high-aspect-ratio mats called CHARM. Experiments on a simulated chip-multiprocessor system show that CHARM improves both the instructions per cycle and system-wide energy-delay product up to 21% and 32%, respectively, with only a 3% increase in die area.",
"title": ""
},
{
"docid": "948257544ca485b689d8663aaba63c5d",
"text": "This paper presents a new single-pass shadow mapping technique that achieves better quality than the approaches based on perspective warping, such as perspective, light-space, and trapezoidal shadow maps. The proposed technique is appropriate for real-time rendering of large virtual environments that include dynamic objects. By performing operations in camera space, this solution successfully handles the general and the dueling frustum cases and produces high-quality shadows even for extremely large scenes. This paper also presents a fast nonlinear projection technique for shadow map stretching that enables complete utilization of the shadow map by eliminating wastage. The application of stretching results in a significant reduction in unwanted perspective aliasing, commonly found in all shadow mapping techniques. Technique is compared with other shadow mapping techniques, and the benefits of the proposed method are presented. The proposed shadow mapping technique is simple and flexible enough to handle most of the special scenarios. An API for a generic shadow mapping solution is presented. This API simplifies the generation of fast and high-quality shadows.",
"title": ""
},
{
"docid": "2ceb6aae1478e42ffae56895e17a9e14",
"text": "Proposed in 1994, the “QED project” was one of the seminally influential initiatives in automated reasoning: It envisioned the formalization of “all of mathematics” and the assembly of these formalizations in a single coherent database. Even though it never led to the concrete system, communal resource, or even joint research envisioned in the QED manifesto, the idea lives on and shapes the research agendas of a significant part of the community This paper surveys a decade of work on representation languages and knowledge management tools for mathematical knowledge conducted in the KWARC research group at Jacobs University Bremen. It assembles the various research strands into a coherent agenda for realizing the QED dream with modern insights and technologies.",
"title": ""
},
{
"docid": "ad3970fe4a43977f521b9c8a68d32647",
"text": "Current key initiatives in deep-space optical communications are treated in terms of historical context, contemporary trends, and prospects for the future. An architectural perspective focusing on high-level drivers, systems, and related operations concepts is provided. Detailed subsystem and component topics are not addressed. A brief overview of past ideas and architectural concepts sets the stage for current developments. Current requirements that might drive a transition from radio frequencies to optical communications are examined. These drivers include mission demand for data rates and/or data volumes; spectrum to accommodate such data rates; and desired power, mass, and cost benefits. As is typical, benefits come with associated challenges. For optical communications, these include atmospheric effects, link availability, pointing, and background light. The paper describes how NASA's Space Communication and Navigation Office will respond to the drivers, achieve the benefits, and mitigate the challenges, as documented in its Optical Communications Roadmap. Some nontraditional architectures and operations concepts are advanced in an effort to realize benefits and mitigate challenges as quickly as possible. Radio frequency communications is considered as both a competitor to and a partner with optical communications. The paper concludes with some suggestions for two affordable first steps that can yet evolve into capable architectures that will fulfill the vision inherent in optical communications.",
"title": ""
},
{
"docid": "c1dbf418f72ad572b3b745a94fe8fbf7",
"text": "In this work we show how to integrate prior statistical knowledge, obtained through principal components analysis (PCA), into a convolutional neural network in order to obtain robust predictions even when dealing with corrupted or noisy data. Our network architecture is trained end-to-end and includes a specifically designed layer which incorporates the dataset modes of variation discovered via PCA and produces predictions by linearly combining them. We also propose a mechanism to focus the attention of the CNN on specific regions of interest of the image in order to obtain refined predictions. We show that our method is effective in challenging segmentation and landmark localization tasks.",
"title": ""
},
{
"docid": "a26d98c1f9cb219f85153e04120053a7",
"text": "The purpose of this paper is to examine the academic and athletic motivation and identify the factors that determine the academic performance among university students in the Emirates of Dubai. The study examined motivation based on non-traditional measure adopting a scale to measure both academic as well as athletic motivation. Keywords-academic performance, academic motivation, athletic performance, university students, business management, academic achievement, career motivation, sports motivation",
"title": ""
},
{
"docid": "6e36dda80f462c23bb7f6224e741e13d",
"text": "Usual way of character's animation is the use of motion captured data. Acquired bones' orientations are blended together according to user input in real-time. Although this massively used method gives a nice results, practical experience show how important is to have a system for interactive direct manipulation of character's skeleton in order to satisfy various tasks in Cartesian space. For this purpose, various methods for solving inverse kinematics problem are used. This paper presents three of such methods: Algebraical method based on limbs positioning; iterative optimization method based on Jacobian pseudo-inversion; and heuristic CCD iterative method. The paper describes them all in detail and discusses practical scope of their use in real-time applications.",
"title": ""
},
{
"docid": "479b2ba292c60ac2441586ac3670e4b8",
"text": "Educational Goals of Course(s): i. Explore and evaluate Multi-Objective Evolutionary algorithm (MOEA) space, Multi-Objective Problem (MOP) space, and parameter space along with MOEA performance comparisons ii. Motivate the student to investigate new areas of MOEA design, implementation, and performance metrics iii. Developed an ability to utilize and improve MOEA performance across a wide variety of application problem domains.",
"title": ""
},
{
"docid": "a1046f5282cf4057fd143fdce79c6990",
"text": "Rheumatoid arthritis is a multisystem disease with underlying immune mechanisms. Osteoarthritis is a debilitating, progressive disease of diarthrodial joints associated with the aging process. Although much is known about the pathogenesis of rheumatoid arthritis and osteoarthritis, our understanding of some immunologic changes remains incomplete. This study tries to examine the numeric changes in the T cell subsets and the alterations in the levels of some cytokines and adhesion molecules in these lesions. To accomplish this goal, peripheral blood and synovial fluid samples were obtained from 24 patients with rheumatoid arthritis, 15 patients with osteoarthritis and six healthy controls. The counts of CD4 + and CD8 + T lymphocytes were examined using flow cytometry. The levels of some cytokines (TNF-α, IL1-β, IL-10, and IL-17) and a soluble intercellular adhesion molecule-1 (sICAM-1) were measured in the sera and synovial fluids using enzyme linked immunosorbant assay. We found some variations in the counts of T cell subsets, the levels of cytokines and sICAM-1 adhesion molecule between the healthy controls and the patients with arthritis. High levels of IL-1β, IL-10, IL-17 and TNF-α (in the serum and synovial fluid) were observed in arthritis compared to the healthy controls. In rheumatoid arthritis, a high serum level of sICAM-1 was found compared to its level in the synovial fluid. A high CD4+/CD8+ T cell ratio was found in the blood of the patients with rheumatoid arthritis. In rheumatoid arthritis, the cytokine levels correlated positively with some clinicopathologic features. To conclude, the development of rheumatoid arthritis and osteoarthritis is associated with alteration of the levels of some cytokines. The assessment of these immunologic changes may have potential prognostic roles.",
"title": ""
},
{
"docid": "d8ead5d749b9af092adf626245e8178a",
"text": "This paper describes a LIN (Local Interconnect Network) Transmitter designed in a BCD HV technology. The key design target is to comply with EMI (electromagnetic interference) specification limits. The two main aspects are low EME (electromagnetic emission) and sufficient immunity against RF disturbance. A gate driver is proposed which uses a certain current summation network for lowering the slew rate on the one hand and being reliable against radio frequency (RF) disturbances within the automotive environment on the other hand. Nowadays the low cost single wire LIN Bus is used for establishing communication between sensors, actuators and other components.",
"title": ""
},
{
"docid": "c45d911aea9d06208a4ef273c9ab5ff3",
"text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.",
"title": ""
},
{
"docid": "0719942bf0fc7ddf03b4caf6402dec30",
"text": "Recent years have seen a renewed interest in the harvesting and conversion of solar energy. Among various technologies, the direct conversion of solar to chemical energy using photocatalysts has received significant attention. Although heterogeneous photocatalysts are almost exclusively semiconductors, it has been demonstrated recently that plasmonic nanostructures of noble metals (mainly silver and gold) also show significant promise. Here we review recent progress in using plasmonic metallic nanostructures in the field of photocatalysis. We focus on plasmon-enhanced water splitting on composite photocatalysts containing semiconductor and plasmonic-metal building blocks, and recently reported plasmon-mediated photocatalytic reactions on plasmonic nanostructures of noble metals. We also discuss the areas where major advancements are needed to move the field of plasmon-mediated photocatalysis forward.",
"title": ""
},
{
"docid": "54c6e02234ce1c0f188dcd0d5ee4f04c",
"text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.",
"title": ""
},
{
"docid": "635d981a3f54735ccea336feb0ead45b",
"text": "Keyphrase is an efficient representation of the main idea of documents. While background knowledge can provide valuable information about documents, they are rarely incorporated in keyphrase extraction methods. In this paper, we propose WikiRank, an unsupervised method for keyphrase extraction based on the background knowledge from Wikipedia. Firstly, we construct a semantic graph for the document. Then we transform the keyphrase extraction problem into an optimization problem on the graph. Finally, we get the optimal keyphrase set to be the output. Our method obtains improvements over other state-of-art models by more than 2% in F1-score.",
"title": ""
},
{
"docid": "0ca588e42d16733bc8eef4e7957e01ab",
"text": "Three-dimensional (3D) finite element (FE) models are commonly used to analyze the mechanical behavior of the bone under different conditions (i.e., before and after arthroplasty). They can provide detailed information but they are numerically expensive and this limits their use in cases where large or numerous simulations are required. On the other hand, 2D models show less computational cost, but the precision of results depends on the approach used for the simplification. Two main questions arise: Are the 3D results adequately represented by a 2D section of the model? Which approach should be used to build a 2D model that provides reliable results compared to the 3D model? In this paper, we first evaluate if the stem symmetry plane used for generating the 2D models of bone-implant systems adequately represents the results of the full 3D model for stair climbing activity. Then, we explore three different approaches that have been used in the past for creating 2D models: (1) without side-plate (WOSP), (2) with variable thickness side-plate and constant cortical thickness (SPCT), and (3) with variable thickness side-plate and variable cortical thickness (SPVT). From the different approaches investigated, a 2D model including a side-plate best represents the results obtained with the full 3D model with much less computational cost. The side-plate needs to have variable thickness, while the cortical bone thickness can be kept constant.",
"title": ""
},
{
"docid": "f6b6b175f556e7ae88661b057eb1c373",
"text": "Legacy encryption systems depend on sharing a key (public or private) among the peers involved in exchanging an encrypted message. However, this approach poses privacy concerns. The users or service providers with the key have exclusive rights on the data. Especially with popular cloud services, control over the privacy of the sensitive data is lost. Even when the keys are not shared, the encrypted material is shared with a third party that does not necessarily need to access the content. Moreover, untrusted servers, providers, and cloud operators can keep identifying elements of users long after users end the relationship with the services. Indeed, Homomorphic Encryption (HE), a special kind of encryption scheme, can address these concerns as it allows any third party to operate on the encrypted data without decrypting it in advance. Although this extremely useful feature of the HE scheme has been known for over 30 years, the first plausible and achievable Fully Homomorphic Encryption (FHE) scheme, which allows any computable function to perform on the encrypted data, was introduced by Craig Gentry in 2009. Even though this was a major achievement, different implementations so far demonstrated that FHE still needs to be improved significantly to be practical on every platform. Therefore, this survey focuses on HE and FHE schemes. First, we present the basics of HE and the details of the well-known Partially Homomorphic Encryption (PHE) and Somewhat Homomorphic Encryption (SWHE), which are important pillars for achieving FHE. Then, the main FHE families, which have become the base for the other follow-up FHE schemes, are presented. Furthermore, the implementations and recent improvements in Gentry-type FHE schemes are also surveyed. Finally, further research directions are discussed. This survey is intended to give a clear knowledge and foundation to researchers and practitioners interested in knowing, applying, and extending the state-of-the-art HE, PHE, SWHE, and FHE systems.",
"title": ""
},
{
"docid": "a5082b49cc584548ac066b9c6ffb2452",
"text": "In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, uids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.",
"title": ""
}
] |
scidocsrr
|
58c8bc749e0e26e1cae2a5987eaffe5c
|
The importance of the label hierarchy in hierarchical multi-label classification
|
[
{
"docid": "97b7065942b53f2d873c80f32242cd00",
"text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, but not on DAG hierarchies. Moreover, it may lead to misleading predictions as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree-and DAG-structured label hierarchies. Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.",
"title": ""
}
] |
[
{
"docid": "fd8574edb4fc609ade520fff36fac8cd",
"text": "A large share of websites today allow users to contribute and manage user-generated content. This content is often in textual form and involves names, terms, and keywords that can be ambiguous and difficult to interpret for other users. Semantic annotation can be used to tackle such issues, but this technique has been adopted by only a few websites. This may be attributed to a lack of a standard web input component that allows users to simply and efficiently annotate text. In this paper, we introduce an autocomplete-enabled annotation box that supports users in associating their text with DBpedia resources as they type. This web component can replace existing input fields and does not require particular user skills. Furthermore, it can be used by semantic web developers as a user interface for advanced semantic search and data processing back-ends. Finally, we validate the approach with a preliminary user study.",
"title": ""
},
{
"docid": "b10a0f8d888d4ecfc0e0d154ae7416dc",
"text": "The purpose of this study was to investigate the differences in the viscoelastic properties of human tendon structures (tendon and aponeurosis) in the medial gastrocnemius muscle between men (n=16) and women (n=13). The elongation of the tendon and aponeurosis of the medial gastrocnemius muscle was measured directly by ultrasonography, while the subjects performed ramp isometric plantar flexion up to the voluntary maximum, followed by a ramp relaxation. The relationship between the estimated muscle force (Fm) and tendon elongation (L) during the ascending phase was fitted to a linear regression, the slope of which was defined as stiffness. The percentage of the area within the Fm-L loop to the area beneath the curve during the ascending phase was calculated as hysteresis. The L values at force production levels beyond 50 N were significantly greater for women than for men. The maximum strain (100×ΔL/initial tendon length) was significantly greater in women [9.5 (1.1)%] than in men [8.1 (1.6)%]. The stiffness and Young's modulus were significantly lower in women [16.5 (3.4) N/mm, 277 (25) MPa] than in men [25.9 (7.0) N/mm, 356 (32) MPa]. Furthermore, the hysteresis was significantly lower in women [11.1 (5.9)%] than in men [18.7 (8.5)%, P=0.048]. These results suggest that there are gender differences in the viscoelastic properties of tendon structures and that these might in part account for previously observed performance differences between the genders.",
"title": ""
},
{
"docid": "55694b963cde47e9aecbeb21fb0e79cf",
"text": "The rise of Uber as the global alternative taxi operator has attracted a lot of interest recently. Aside from the media headlines which discuss the new phenomenon, e.g. on how it has disrupted the traditional transportation industry, policy makers, economists, citizens and scientists have engaged in a discussion that is centred around the means to integrate the new generation of the sharing economy services in urban ecosystems. In this work, we aim to shed new light on the discussion, by taking advantage of a publicly available longitudinal dataset that describes the mobility of yellow taxis in New York City. In addition to movement, this data contains information on the fares paid by the taxi customers for each trip. As a result we are given the opportunity to provide a first head to head comparison between the iconic yellow taxi and its modern competitor, Uber, in one of the world’s largest metropolitan centres. We identify situations when Uber X, the cheapest version of the Uber taxi service, tends to be more expensive than yellow taxis for the same journey. We also demonstrate how Uber’s economic model effectively takes advantage of well known patterns in human movement. Finally, we take our analysis a step further by proposing a new mobile application that compares taxi prices in the city to facilitate traveller’s taxi choices, hoping to ultimately to lead to a reduction of commuter costs. Our study provides a case on how big datasets that become public can improve urban services for consumers by offering the opportunity for transparency in economic sectors that lack up to date regulations.",
"title": ""
},
{
"docid": "1415e7053edc09e149a5bcc124aa2cf0",
"text": "Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes as input the user interactions with the initial segmentation and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show our method achieves a large improvement from automatic CNNs, and obtains comparable and even higher accuracy with fewer user interventions and less time compared with traditional interactive methods.",
"title": ""
},
{
"docid": "4d3baff85c302b35038f35297a8cdf90",
"text": "Most speech recognition applications in use today rely heavily on confidence measure for making optimal decisions. In this paper, we aim to answer the question: what can be done to improve the quality of confidence measure if we cannot modify the speech recognition engine? The answer provided in this paper is a post-processing step called confidence calibration, which can be viewed as a special adaptation technique applied to confidence measure. Three confidence calibration methods have been developed in this work: the maximum entropy model with distribution constraints, the artificial neural network, and the deep belief network. We compare these approaches and demonstrate the importance of key features exploited: the generic confidence-score, the application-dependent word distribution, and the rule coverage ratio. We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.",
"title": ""
},
{
"docid": "d93abfdc3bc20a23e533f3ad2e30b9c9",
"text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.",
"title": ""
},
{
"docid": "3fa0911a8e65461a0c1014cc481293bb",
"text": "Researchers are using emerging technologies to develop novel play environments, while established computer and console game markets continue to grow rapidly. Even so, evaluating the success of interactive play environments is still an open research challenge. Both subjective and objective techniques fall short due to limited evaluative bandwidth; there remains no corollary in play environments to task performance with productivity systems. This paper presents a method of modeling user emotional state, based on a user's physiology, for users interacting with play technologies. Modeled emotions are powerful because they capture usability and playability through metrics relevant to ludic experience; account for user emotion; are quantitative and objective; and are represented continuously over a session. Furthermore, our modeled emotions show the same trends as reported emotions for fun, boredom, and excitement; however, the modeled emotions revealed differences between three play conditions, while the differences between the subjective reports failed to reach significance.",
"title": ""
},
{
"docid": "ac3223b0590216936cc2f48f6a61dc40",
"text": "It is greatly demanded that to develop a kind of stably stair climbing mobile vehicle to assist the physically handicapped in moving outdoors. In this paper, we first propose a novel leg-wheel hybrid stair-climbing vehicle, \"Zero Carrier\", which consists of eight unified prismatic-joint legs, four of which attached active wheels and other four attached passive casters. Zero Carrier can be designed lightweight, compact, powerful, together with its significant stability on stair climbing motion, since its mechanism is mostly concentrated in its eight simplified legs. We discuss the leg mechanism and control method of the first trial model, Zero Carrier I, and verify its performance based on the experiments of stair climbing and moving over obstacles performed by Zero Carrier I",
"title": ""
},
{
"docid": "752a9661f174499c2aa4a4fa70d5b46b",
"text": "Energy is an important consideration in the design and deployment of wireless sensor networks (WSNs) since sensor nodes are typically powered by batteries with limited capacity. Since the communication unit on a wireless sensor node is the major power consumer, data compression is one of possible techniques that can help reduce the amount of data exchanged between wireless sensor nodes resulting in power saving. However, wireless sensor networks possess significant limitations in communication, processing, storage, bandwidth, and power. Thus, any data compression scheme proposed for WSNs must be lightweight. In this paper, we present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. Our proposed ALDC scheme performs compression losslessly using multiple code options. Adaptive compression schemes allow compression to dynamically adjust to a changing source. The data sequence to be compressed is partitioned into blocks, and the optimal compression scheme is applied for each block. Using various real-world sensor datasets we demonstrate the merits of our proposed compression algorithm in comparison with other recently proposed lossless compression algorithms for WSNs.",
"title": ""
},
{
"docid": "492c5a20c4ef5b7a3ea08083ecf66bce",
"text": "We present the design for an absorbing metamaterial (MM) with near unity absorbance A(omega). Our structure consists of two MM resonators that couple separately to electric and magnetic fields so as to absorb all incident radiation within a single unit cell layer. We fabricate, characterize, and analyze a MM absorber with a slightly lower predicted A(omega) of 96%. Unlike conventional absorbers, our MM consists solely of metallic elements. The substrate can therefore be optimized for other parameters of interest. We experimentally demonstrate a peak A(omega) greater than 88% at 11.5 GHz.",
"title": ""
},
{
"docid": "88130a65e625f85e527d63a0d2a446d4",
"text": "Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.",
"title": ""
},
{
"docid": "e347eadb8df6386e70171d73388b8ace",
"text": "An ultra-large voltage conversion ratio converter is proposed by integrating a switched-capacitor circuit with a coupled inductor technology. The proposed converter can be seen as an equivalent parallel connection to the load of a basic boost converter and a number of forward converters, each one containing a switched-capacitor circuit. All the stages are activated by the boost switch. A single active switch is required, with no need of extreme duty-ratio values. The leakage energy of the coupled inductor is recycled to the load. The inrush current problem of switched capacitors is restrained by the leakage inductance of the coupled-inductor. The above features are the reason for the high efficiency performance. The operating principles and steady state analyses of continuous, discontinuous and boundary conduction modes are discussed in detail. To verify the performance of the proposed converter, a 200 W/20 V to 400 V prototype was implemented. The maximum measured efficiency is 96.4%. The full load efficiency is 95.1%.",
"title": ""
},
{
"docid": "5378e05d2d231969877131a011b3606a",
"text": "Environmental, health, and safety (EHS) concerns are receiving considerable attention in nanoscience and nanotechnology (nano) research and development (R&D). Policymakers and others have urged that research on nano's EHS implications be developed alongside scientific research in the nano domain rather than subsequent to applications. This concurrent perspective suggests the importance of early understanding and measurement of the diffusion of nano EHS research. The paper examines the diffusion of nano EHS publications, defined through a set of search terms, into the broader nano domain using a global nanotechnology R&D database developed at Georgia Tech. The results indicate that nano EHS research is growing rapidly although it is orders of magnitude smaller than the broader nano S&T domain. Nano EHS work is moderately multidisciplinary, but gaps in biomedical nano EHS's connections with environmental nano EHS are apparent. The paper discusses the implications of these results for the continued monitoring and development of the cross-disciplinary utilization of nano EHS research.",
"title": ""
},
{
"docid": "53ab91cdff51925141c43c4bc1c6aade",
"text": "Floods are the most common natural disasters, and cause significant damage to life, agriculture and economy. Research has moved on from mathematical modeling or physical parameter based flood forecasting schemes, to methodologies focused around algorithmic approaches. The Internet of Things (IoT) is a field of applied electronics and computer science where a system of devices collects data in real time and transfers it through a Wireless Sensor Network (WSN) to the computing device for analysis. IoT generally combines embedded system hardware techniques along with data science or machine learning models. In this work, an IoT and machine learning based embedded system is proposed to predict the probability of floods in a river basin. The model uses a modified mesh network connection over ZigBee for the WSN to collect data, and a GPRS module to send the data over the internet. The data sets are evaluated using an artificial neural network model. The results of the analysis which are also appended show a considerable improvement over the currently existing methods.",
"title": ""
},
{
"docid": "68c7509ec0261b1ddccef7e3ad855629",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "6e4dfb4c6974543246003350b5e3e07f",
"text": "Zero-shot object detection is an emerging research topic that aims to recognize and localize previously ‘unseen’ objects. This setting gives rise to several unique challenges, e.g., highly imbalanced positive vs. negative instance ratio, ambiguity between background and unseen classes and the proper alignment between visual and semantic concepts. Here, we propose an end-to-end deep learning framework underpinned by a novel loss function that puts more emphasis on difficult examples to avoid class imbalance. We call our objective the ‘Polarity loss’ because it explicitly maximizes the gap between positive and negative predictions. Such a margin maximizing formulation is important as it improves the visual-semantic alignment while resolving the ambiguity between background and unseen. Our approach is inspired by the embodiment theories in cognitive science, that claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word dictionary) and the perception of the physical world (visual imagery). To this end, we learn to attend to a dictionary of related semantic concepts that eventually refines the noisy semantic embeddings and helps establish a better synergy between visual and semantic domains. Our extensive results on MS-COCO and Pascal VOC datasets show as high as 14× mAP improvement over state of the art.1",
"title": ""
},
{
"docid": "49a54c57984c3feaef32b708ae328109",
"text": "While it has a long history, the last 30 years have brought considerable advances to the discipline of forensic anthropology worldwide. Every so often it is essential that these advances are noticed and trends assessed. It is also important to identify those research areas that are needed for the forthcoming years. The purpose of this special issue is to examine some of the examples of research that might identify the trends in the 21st century. Of the 14 papers 5 dealt with facial features and identification such as facial profile determination and skull-photo superimposition. Age (fetus and cranial thickness), sex (supranasal region, arm and leg bones) and stature (from the arm bones) estimation were represented by five articles. Others discussed the estimation of time since death, skull color and diabetes, and a case study dealing with a mummy and skeletal analysis in comparison with DNA identification. These papers show that age, sex, and stature are still important issues of the discipline. Research on the human face is moving from hit and miss case studies to a more scientifically sound direction. A lack of studies on trauma and taphonomy is very clear. Anthropologists with other scientists can develop research areas to make the identification process more reliable. Research should include the assessment of animal attacks on human remains, factors affecting decomposition rates, and aging of the human face. Lastly anthropologists should be involved in the education of forensic pathologists about osteological techniques and investigators regarding archaeology of crime scenes.",
"title": ""
},
{
"docid": "eaf7b6b0cc18453538087cc90254dbd8",
"text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.",
"title": ""
},
{
"docid": "8742ca1440f2913f61d0be3f5415c682",
"text": "Fluid-phase endocytosis (pinocytosis) is highly active in amoebae of the cellular slime mould Dictyostelium discoideum as it provides an efficient entry of nutrients in axenic strains. Detailed kinetic analyses were conducted using fluorescein-labeled dextran (FITC-dextran) as fluid-phase marker and pH probe. Cells were first pulsed with FITC-dextran during a short period then chased by suspension in probe-free medium. Chase kinetics were characterized by a lag phase of about 40 min before pseudo-first order FITC-dextran efflux and thus reflected the progression of the probe cohort through the various endosomal compartments along the endosomal pathway. Temporal evolution of endo-lysosomal pH showed a rapid acidification (T1/2 approximately 10 min) to pH 5.0 followed by an increase up to pH 6.2 to 6.3. The effects of cycloheximide and caffeine, two inhibitors of endocytosis in Dictyostelium amoebae, on the evolution of endosomal pH during fluid-phase endocytosis, have been investigated. Cycloheximide fully blocked the cellular transit of FITC-dextran but acidification of endo-lysosomal compartments still took place. Caffeine increased endo-lysosomal pH, probably as a consequence of an elevation of cytosolic [Ca2+]. Furthermore, it allowed the functional identification of a caffeine-insensitive terminal segment of the endocytic pathway. It corresponded to a recycling, postlysosomal compartment at pH 6.2 to 6.3 with an apparent volume of 160 microns 3/amoebae.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
}
] |
scidocsrr
|
0a3a922b9c9b58b3fd13d369a4e171c8
|
MSER-Based Real-Time Text Detection and Tracking
|
[
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
}
] |
[
{
"docid": "8b060d80674bd3f329a675f1a3f4bce2",
"text": "Smartphones are ubiquitous devices that offer endless possibilities for health-related applications such as Ambient Assisted Living (AAL). They are rich in sensors that can be used for Human Activity Recognition (HAR) and monitoring. The emerging problem now is the selection of optimal combinations of these sensors and existing methods to accurately and efficiently perform activity recognition in a resource and computationally constrained environment. To accomplish efficient activity recognition on mobile devices, the most discriminative features and classification algorithms must be chosen carefully. In this study, sensor fusion is employed to improve the classification results of a lightweight classifier. Furthermore, the recognition performance of accelerometer, gyroscope and magnetometer when used separately and simultaneously on a feature-level sensor fusion is examined to gain valuable knowledge that can be used in dynamic sensing and data collection. Six ambulatory activities, namely, walking, running, sitting, standing, walking upstairs and walking downstairs, are inferred from low-sensor data collected from the right trousers pocket of the subjects and feature selection is performed to further optimize resource use.",
"title": ""
},
{
"docid": "0a0f826f1a8fa52d61892632fd403502",
"text": "We show that sequence information can be encoded into highdimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word’s semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word’s grammatical role in a sentence and thus tells of the word’s meaning. Jones and Mewhort (2007) show that word order can be included in the semantic vectors using holographic reduced representation and convolution. We show here that the order information can be captured also by permuting of vector coordinates, thus providing a general and computationally light alternative to convolution.",
"title": ""
},
{
"docid": "ce9b9cc57277b635262a5d4af999dc32",
"text": "Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "6e7098f39a8b860307dba52dcc7e0d42",
"text": "The paper presents an experimental algorithm to detect conventionalized metaphors implicit in the lexical data in a resource like WordNet, where metaphors are coded into the senses and so would never be detected by any algorithm based on the violation of preferences, since there would always be a constraint satisfied by such senses. We report an implementation of this algorithm, which was implemented first the preference constraints in VerbNet. We then derived in a systematic way a far more extensive set of constraints based on WordNet glosses, and with this data we reimplemented the detection algorithm and got a substantial improvement in recall. We suggest that this technique could contribute to improve the performance of existing metaphor detection strategies that do not attempt to detect conventionalized metaphors. The new WordNet-derived data is of wider significance because it also contains adjective constraints, unlike any existing lexical resource, and can be applied to any language with a semantic parser (and",
"title": ""
},
{
"docid": "e49515145975eadccc20b251d56f0140",
"text": "High mortality of nestling cockatiels (Nymphicus hollandicus) was observed in one breeding flock in Slovakia. The nestling mortality affected 50% of all breeding pairs. In general, all the nestlings in affected nests died. Death occurred suddenly in 4to 6-day-old birds, most of which had full crops. No feather disorders were diagnosed in this flock. Two dead nestlings were tested by nested PCR for the presence of avian polyomavirus (APV) and Chlamydophila psittaci and by single-round PCR for the presence of beak and feather disease virus (BFDV). After the breeding season ended, a breeding pair of cockatiels together with their young one and a fledgling budgerigar (Melopsittacus undulatus) were examined. No clinical alterations were observed in these birds. Haemorrhages in the proventriculus and irregular foci of yellow liver discoloration were found during necropsy in the young cockatiel and the fledgling budgerigar. Microscopy revealed liver necroses and acute haemolysis in the young cockatiel and confluent liver necroses and heart and kidney haemorrhages in the budgerigar. Two dead cockatiel nestlings, the young cockatiel and the fledgling budgerigar were tested positive for APV, while the cockatiel adults were negative. The presence of BFDV or Chlamydophila psittaci DNA was detected in none of the birds. The specificity of PCR was confirmed by the sequencing of PCR products amplified from the samples from the young cockatiel and the fledgling budgerigar. The sequences showed 99.6–100% homology with the previously reported sequences. To our knowledge, this is the first report of APV infection which caused a fatal disease in parent-raised cockatiel nestlings and merely subclinical infection in budgerigar nestlings.",
"title": ""
},
{
"docid": "30c96eb397b515f6b3e4d05c071413d1",
"text": "Thin-film solar cells have the potential to significantly decrease the cost of photovoltaics. Light trapping is particularly critical in such thin-film crystalline silicon solar cells in order to increase light absorption and hence cell efficiency. In this article we investigate the suitability of localized surface plasmons on silver nanoparticles for enhancing the absorbance of silicon solar cells. We find that surface plasmons can increase the spectral response of thin-film cells over almost the entire solar spectrum. At wavelengths close to the band gap of Si we observe a significant enhancement of the absorption for both thin-film and wafer-based structures. We report a sevenfold enhancement for wafer-based cells at =1200 nm and up to 16-fold enhancement at =1050 nm for 1.25 m thin silicon-on-insulator SOI cells, and compare the results with a theoretical dipole-waveguide model. We also report a close to 12-fold enhancement in the electroluminescence from ultrathin SOI light-emitting diodes and investigate the effect of varying the particle size on that enhancement. © 2007 American Institute of Physics. DOI: 10.1063/1.2734885",
"title": ""
},
{
"docid": "3f5706c0aedb5f66497a564105c3dea0",
"text": "The scientific study of hate speech, from a computer science point of view, is recent. This survey organizes and describes the current state of the field, providing a structured overview of previous approaches, including core algorithms, methods, and main features used. This work also discusses the complexity of the concept of hate speech, defined in many platforms and contexts, and provides a unifying definition. This area has an unquestionable potential for societal impact, particularly in online communities and digital media platforms. The development and systematization of shared resources, such as guidelines, annotated datasets in multiple languages, and algorithms, is a crucial step in advancing the automatic detection of hate speech.",
"title": ""
},
{
"docid": "9faa8b39898eaa4ca0a0c23d29e7a0ff",
"text": "Highly emphasized in entrepreneurial practice, business models have received limited attention from researchers. No consensus exists regarding the definition, nature, structure, and evolution of business models. Still, the business model holds promise as a unifying unit of analysis that can facilitate theory development in entrepreneurship. This article synthesizes the literature and draws conclusions regarding a number of these core issues. Theoretical underpinnings of a firm's business model are explored. A sixcomponent framework is proposed for characterizing a business model, regardless of venture type. These components are applied at three different levels. The framework is illustrated using a successful mainstream company. Suggestions are made regarding the manner in which business models might be expected to emerge and evolve over time. a c Purchase Export",
"title": ""
},
{
"docid": "7670b1eea992a1e83d3ebc1464563d60",
"text": "The present work was conducted to demonstrate a method that could be used to assess the hypothesis that children with specific language impairment (SLI) often respond more slowly than unimpaired children on a range of tasks. The data consisted of 22 pairs of mean response times (RTs) obtained from previously published studies; each pair consisted of a mean RT for a group of children with SLI for an experimental condition and the corresponding mean RT for a group of children without SLI. If children with SLI always respond more slowly than unimpaired children and by an amount that does not vary across tasks, then RTs for children with SLI should increase linearly as a function of RTs for age-matched control children without SLI. This result was obtained and is consistent with the view that differences in processing speed between children with and without SLI reflect some general (i.e., non-task specific) component of cognitive processing. Future applications of the method are suggested.",
"title": ""
},
{
"docid": "95050a66393b41978cf136c1c99b1922",
"text": "In this paper, we explore a new way to provide context-aware assistance for indoor navigation using a wearable vision system. We investigate how to represent the cognitive knowledge of wayfinding based on first-person-view videos in real-time and how to provide context-aware navigation instructions in a human-like manner. Inspired by the human cognitive process of wayfinding, we propose a novel cognitive model that represents visual concepts as a hierarchical structure. It facilitates efficient and robust localization based on cognitive visual concepts. Next, we design a prototype system that provides intelligent context-aware assistance based on the cognitive indoor navigation knowledge model. We conducted field tests and evaluated the system's efficacy by benchmarking it against traditional 2D maps and human guidance. The results show that context-awareness built on cognitive visual perception enables the system to emulate the efficacy of a human guide, leading to positive user experience.",
"title": ""
},
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
{
"docid": "024b739dc047e17310fe181591fcd335",
"text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.",
"title": ""
},
{
"docid": "b1c0fb9a020d8bc85b23f696586dd9d3",
"text": "Most instances of real-life language use involve discourses in which several sentences or utterances are coherently linked through the use of repeated references. Repeated reference can take many forms, and the choice of referential form has been the focus of much research in several related fields. In this article we distinguish between three main approaches: one that addresses the ‘why’ question – why are certain forms used in certain contexts; one that addresses the ‘how’ question – how are different forms processed; and one that aims to answer both questions by seriously considering both the discourse function of referential expressions, and the cognitive mechanisms that underlie their processing cost. We argue that only the latter approach is capable of providing a complete view of referential processing, and that in so doing it may also answer a more profound ‘why’ question – why does language offer multiple referential forms. Coherent discourse typically involves repeated references to previously mentioned referents, and these references can be made with different forms. For example, a person mentioned in discourse can be referred to by a proper name (e.g., Bill), a definite description (e.g., the waiter), or a pronoun (e.g., he). When repeated reference is made to a referent that was mentioned in the same sentence, the choice and processing of referential form may be governed by syntactic constraints such as binding principles (Chomsky 1981). However, in many cases of repeated reference to a referent that was mentioned in the same sentence, and in all cases of repeated reference across sentences, the choice and processing of referential form reflects regular patterns and preferences rather than strong syntactic constraints. The present article focuses on the factors that underlie these patterns. Considerable research in several disciplines has aimed to explain how speakers and writers choose which form they should use to refer to objects and events in discourse, and how listeners and readers process different referential forms (e.g., Chafe 1976; Clark & Wilkes 1986; Kintsch 1988; Gernsbacher 1989; Ariel 1990; Gordon, Grosz & Gilliom 1993; Gundel, Hedberg & Zacharski 1993; Garrod & Sanford 1994; Gordon & Hendrick 1998; Almor 1999; Cowles & Garnham 2005). One of the central observations in this research is that there exists an inverse relation between the specificity of the referential",
"title": ""
},
{
"docid": "1df103aef2a4a5685927615cfebbd1ea",
"text": "While human subjects lift small objects using the precision grip between the tips of the fingers and thumb the ratio between the grip force and the load force (i.e. the vertical lifting force) is adapted to the friction between the object and the skin. The present report provides direct evidence that signals in tactile afferent units are utilized in this adaptation. Tactile afferent units were readily excited by small but distinct slips between the object and the skin revealed as vibrations in the object. Following such afferent slip responses the force ratio was upgraded to a higher, stable value which provided a safety margin to prevent further slips. The latency between the onset of the a slip and the appearance of the ratio change (74 ±9 ms) was about half the minimum latency for intended grip force changes triggered by cutaneous stimulation of the fingers. This indicated that the motor responses were automatically initiated. If the subjects were asked to very slowly separate their thumb and the opposing finger while the object was held in air, grip force reflexes originating from afferent slip responses appeared to counteract the voluntary command, but the maintained upgrading of the force ratio was suppressed. In experiments with weak electrical cutaneous stimulation delivered through the surfaces of the object it was established that tactile input alone could trigger the upgrading of the force ratio. Although, varying in responsiveness, each of the three types of tactile units which exhibit a pronounced dynamic sensitivity (FA I, FA II and SA I units) could reliably signal these slips. Similar but generally weaker afferent responses, sometimes followed by small force ratio changes, also occurred in the FA I and the SA I units in the absence of detectable vibrations events. In contrast to the responses associated with clear vibratory events, the weaker afferent responses were probably caused by localized frictional slips, i.e. slips limited to small fractions of the skin area in contact with the object. Indications were found that the early adjustment to a new frictional condition, which may appear soon (ca. 0.1–0.2 s) after the object is initially gripped, might depend on the vigorous responses in the FA I units during the initial phase of the lifts (see Westling and Johansson 1987). The role of the tactile input in the adaptation of the force coordination to the frictional condition is discussed.",
"title": ""
},
{
"docid": "562ec4c39f0d059fbb9159ecdecd0358",
"text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.",
"title": ""
},
{
"docid": "074567500751d814eef4ba979dc3cc8d",
"text": "Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner’s predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms’ merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems,",
"title": ""
},
{
"docid": "7b945d65f37fbd80b7cf1a5fad526360",
"text": "from these individual steps to produce global behavior, usually averaged over time. Computer science provides the key elements to describe mechanistic steps: algorithms and programming languages [3]. Following the metaphor of molecules as processes introduced in [4], process calculi have been identified as a promising tool to model biological systems that are inherently complex, concurrent, and driven by the interactions of their subsystems. Visualization in Process Algebra Models of Biological Systems",
"title": ""
},
{
"docid": "f935bdde9d4571f50e47e48f13bfc4b8",
"text": "BACKGROUND\nThe incidence of microcephaly in Brazil in 2015 was 20 times higher than in previous years. Congenital microcephaly is associated with genetic factors and several causative agents. Epidemiological data suggest that microcephaly cases in Brazil might be associated with the introduction of Zika virus. We aimed to detect and sequence the Zika virus genome in amniotic fluid samples of two pregnant women in Brazil whose fetuses were diagnosed with microcephaly.\n\n\nMETHODS\nIn this case study, amniotic fluid samples from two pregnant women from the state of Paraíba in Brazil whose fetuses had been diagnosed with microcephaly were obtained, on the recommendation of the Brazilian health authorities, by ultrasound-guided transabdominal amniocentesis at 28 weeks' gestation. The women had presented at 18 weeks' and 10 weeks' gestation, respectively, with clinical manifestations that could have been symptoms of Zika virus infection, including fever, myalgia, and rash. After the amniotic fluid samples were centrifuged, DNA and RNA were extracted from the purified virus particles before the viral genome was identified by quantitative reverse transcription PCR and viral metagenomic next-generation sequencing. Phylogenetic reconstruction and investigation of recombination events were done by comparing the Brazilian Zika virus genome with sequences from other Zika strains and from flaviviruses that occur in similar regions in Brazil.\n\n\nFINDINGS\nWe detected the Zika virus genome in the amniotic fluid of both pregnant women. The virus was not detected in their urine or serum. Tests for dengue virus, chikungunya virus, Toxoplasma gondii, rubella virus, cytomegalovirus, herpes simplex virus, HIV, Treponema pallidum, and parvovirus B19 were all negative. After sequencing of the complete genome of the Brazilian Zika virus isolated from patient 1, phylogenetic analyses showed that the virus shares 97-100% of its genomic identity with lineages isolated during an outbreak in French Polynesia in 2013, and that in both envelope and NS5 genomic regions, it clustered with sequences from North and South America, southeast Asia, and the Pacific. After assessing the possibility of recombination events between the Zika virus and other flaviviruses, we ruled out the hypothesis that the Brazilian Zika virus genome is a recombinant strain with other mosquito-borne flaviviruses.\n\n\nINTERPRETATION\nThese findings strengthen the putative association between Zika virus and cases of microcephaly in neonates in Brazil. Moreover, our results suggest that the virus can cross the placental barrier. As a result, Zika virus should be considered as a potential infectious agent for human fetuses. Pathogenesis studies that confirm the tropism of Zika virus for neuronal cells are warranted.\n\n\nFUNDING\nConsellho Nacional de Desenvolvimento e Pesquisa (CNPq), Fundação de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ).",
"title": ""
},
{
"docid": "f2f7b7152de3b83cc476e38eb6265fdf",
"text": "The discrimination of textures is a critical aspect of identi\"cation in digital imagery. Texture features generated by Gabor \"lters have been increasingly considered and applied to image analysis. Here, a comprehensive classi\"cation and segmentation comparison of di!erent techniques used to produce texture features using Gabor \"lters is presented. These techniques are based on existing implementations as well as new, innovative methods. The functional characterization of the \"lters as well as feature extraction based on the raw \"lter outputs are both considered. Overall, using the Gabor \"lter magnitude response given a frequency bandwidth and spacing of one octave and orientation bandwidth and spacing of 303 augmented by a measure of the texture complexity generated preferred results. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "24ac33300d3ea99441068c20761e8305",
"text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.",
"title": ""
}
] |
scidocsrr
|
05524af7dccb5b0d91040086c2c51573
|
Mining , Pruning and Visualizing Frequent Patterns for Temporal Event Sequence Analysis
|
[
{
"docid": "f2c8af1f4bcf7115fc671ae9922adbb3",
"text": "Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.",
"title": ""
},
{
"docid": "5f04fcacc0dd325a1cd3ba5a846fe03f",
"text": "Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. He can either obtain summarization of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.",
"title": ""
}
] |
[
{
"docid": "f7f6ee050a759842cbbab74e7487ab15",
"text": "Tests, as learning events, can enhance subsequent recall more than do additional study opportunities, even without feedback. Such advantages of testing tend to appear, however, only at long retention intervals and/or when criterion tests stress recall, rather than recognition, processes. We propose that the interaction of the benefits of testing versus restudying with final-test delay and format reflects not only that successful retrievals are more powerful learning events than are re-presentations but also that the distribution of memory strengths across items is shifted differentially by testing and restudying. The benefits of initial testing over restudying, in this view, should increase as the delay or format of the final test makes that test more difficult. Final-test difficulty, not the similarity of initial-test and final-test conditions, should determine the benefits of testing. In Experiments 1 and 2 we indeed found that initial cued-recall testing enhanced subsequent recall more than did restudying when the final test was a difficult (free-recall) test but not when it was an easier (cued-recall) test that matched the initial test. The results of Experiment 3 supported a new prediction of the distribution framework: namely, that the final cued-recall test that did not show a benefit of testing in Experiment 1 should show such a benefit when that test was made more difficult by introducing retroactive interference. Overall, our results suggest that the differential consequences of initial testing versus restudying reflect, in part, differences in how items distributions are shifted by testing and studying.",
"title": ""
},
{
"docid": "600d04e1d78084b36c9fb573fb9d699a",
"text": "A mobile robot is designed to pick and place the objects through voice commands. This work would be practically useful to wheelchair bound persons. The pick and place robot is designed in a way that it is able to help the user to pick up an item that is placed at two different levels using an extendable arm. The robot would move around to pick up an item and then pass it back to the user or to a desired location as told by the user. The robot control is achieved through voice commands such as left, right, straight, etc. in order to help the robot to navigate around. Raspberry Pi 2 controls the overall design with 5 DOF servo motor arm. The webcam is used to navigate around which provides live streaming using a mobile application for the user to look into. Results show the ability of the robot to pick and place the objects up to a height of 23.5cm through proper voice commands.",
"title": ""
},
{
"docid": "b5af51c869fa4863dfa581b0fb8cc20a",
"text": "This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multilingual tutorial instruction utilizing both English and the native language of the user.",
"title": ""
},
{
"docid": "3304f4d4c936a416b0ced56ee8e96f20",
"text": "Big Data analytics plays a key role through reducing the data size and complexity in Big Data applications. Visualization is an important approach to helping Big Data get a complete view of data and discover data values. Big Data analytics and visualization should be integrated seamlessly so that they work best in Big Data applications. Conventional data visualization methods as well as the extension of some conventional methods to Big Data applications are introduced in this paper. The challenges of Big Data visualization are discussed. New methods, applications, and technology progress of Big Data visualization are presented.",
"title": ""
},
{
"docid": "17c81b17aa32ad6a732fc9f0c6b9ad76",
"text": "Highly pathogenic avian influenza A/H5N1 virus can cause morbidity and mortality in humans but thus far has not acquired the ability to be transmitted by aerosol or respiratory droplet (\"airborne transmission\") between humans. To address the concern that the virus could acquire this ability under natural conditions, we genetically modified A/H5N1 virus by site-directed mutagenesis and subsequent serial passage in ferrets. The genetically modified A/H5N1 virus acquired mutations during passage in ferrets, ultimately becoming airborne transmissible in ferrets. None of the recipient ferrets died after airborne infection with the mutant A/H5N1 viruses. Four amino acid substitutions in the host receptor-binding protein hemagglutinin, and one in the polymerase complex protein basic polymerase 2, were consistently present in airborne-transmitted viruses. The transmissible viruses were sensitive to the antiviral drug oseltamivir and reacted well with antisera raised against H5 influenza vaccine strains. Thus, avian A/H5N1 influenza viruses can acquire the capacity for airborne transmission between mammals without recombination in an intermediate host and therefore constitute a risk for human pandemic influenza.",
"title": ""
},
{
"docid": "420719690b6249322927153daedba87b",
"text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.",
"title": ""
},
{
"docid": "dd412b31bc6f7f18ca18a54dc5267cc3",
"text": "We propose a partial information state-based framework for collaborative dialogue and argument between agents. We employ a three-valued based nonmonotonic logic, NML3, for representing and reasoning about Partial Information States (PIS). NML3 formalizes some aspects of revisable reasoning and it is sound and complete. Within the framework of NML3, we present a formalization of some basic dialogue moves and the rules of protocols of some types of dialogue. The rules of a protocol are nonmonotonic in the sense that the set of propositions to which an agent is committed and the validity of moves vary from one move to another. The use of PIS allows an agent to expand consistently its viewpoint with some of the propositions to which another agent, involved in a dialogue, is overtly committed. A proof method for the logic NML3 has been successfully implemented as an automatic theorem prover. We show, via some examples, that the tableau method employed to implement the theorem prover allows an agent, absolute access to every stage of a proof process. This access is useful for constructive argumentation and for finding cooperative and/or informative answers.",
"title": ""
},
{
"docid": "d1cf6f36fe964ac9e48f54a1f35e94c3",
"text": "Recognising patterns that correlate multiple events over time becomes increasingly important in applications from urban transportation to surveillance monitoring. In many realworld scenarios, however, timestamps of events may be erroneously recorded and events may be dropped from a stream due to network failures or load shedding policies. In this work, we present SimpMatch, a novel simplex-based algorithm for probabilistic evaluation of event queries using constraints over event orderings in a stream. Our approach avoids learning probability distributions for time-points or occurrence intervals. Instead, we employ the abstraction of segmented intervals and compute the probability of a sequence of such segments using the principle of order statistics. The algorithm runs in linear time to the number of missed timestamps, and shows high accuracy, yielding exact results if event generation is based on a Poisson process and providing a good approximation otherwise. As we demonstrate empirically, SimpMatch enables efficient and effective reasoning over event streams, outperforming state-ofthe-art methods for probabilistic evaluation of event queries by up to two orders of magnitude.",
"title": ""
},
{
"docid": "01895415b6785dda28ac5fa133c97909",
"text": "Lossy compression introduces complex compression artifacts, particularly blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restore sharpened images that are accompanied with ringing effects. Inspired by the success of deep convolutional networks (DCN) on superresolution [6], we formulate a compact and efficient network for seamless attenuation of different compression artifacts. To meet the speed requirement of real-world applications, we further accelerate the proposed baseline model by layer decomposition and joint use of large-stride convolutional and deconvolutional layers. This also leads to a more general CNN framework that has a close relationship with the conventional Multi-Layer Perceptron (MLP). Finally, the modified network achieves a speed up of 7.5× with almost no performance loss compared to the baseline model. We also demonstrate that a deeper model can be effectively trained with features learned in a shallow network. Following a similar “easy to hard” idea, we systematically investigate three practical transfer settings and show the effectiveness of transfer learning in low-level vision problems. Our method shows superior performance than the state-of-the-art methods both on benchmark datasets and a real-world use case.",
"title": ""
},
{
"docid": "46e8318e76a1b2e539d7eafd65617993",
"text": "A super wideband printed modified bow-tie antenna loaded with rounded-T shaped slots fed through a microstrip balun is proposed for microwave and millimeter-wave band imaging applications. The modified slot-loaded bow-tie pattern increases the electrical length of the bow-tie antenna reducing the lower band to 3.1 GHz. In addition, over the investigated frequency band up to 40 GHz, the proposed modified bow-tie pattern considerably flattens the input impedance response of the bow-tie resulting in a smooth impedance matching performance enhancing the reflection coefficient (S11) characteristics. The introduction of the modified ground plane printed underneath the bow-tie, on the other hand, yields to directional far-field radiation patterns with considerably enhanced gain performance. The S11 and E-plane/H-plane far-field radiation pattern measurements have been carried out and it is demonstrated that the fabricated bow-tie antenna operates across a measured frequency band of 3.1-40 GHz with an average broadband gain of 7.1 dBi.",
"title": ""
},
{
"docid": "7023b8c49c03f37d4a71ed179dddf487",
"text": "PURPOSE\nThe Study of Transition, Outcomes and Gender (STRONG) was initiated to assess the health status of transgender people in general and following gender-affirming treatments at Kaiser Permanente health plans in Georgia, Northern California and Southern California. The objectives of this communication are to describe methods of cohort ascertainment and data collection and to characterise the study population.\n\n\nPARTICIPANTS\nA stepwise methodology involving computerised searches of electronic medical records and free-text validation of eligibility and gender identity was used to identify a cohort of 6456 members with first evidence of transgender status (index date) between 2006 and 2014. The cohort included 3475 (54%) transfeminine (TF), 2892 (45%) transmasculine (TM) and 89 (1%) members whose natal sex and gender identity remained undetermined from the records. The cohort was matched to 127 608 enrollees with no transgender evidence (63 825 women and 63 783 men) on year of birth, race/ethnicity, study site and membership year of the index date. Cohort follow-up extends through the end of 2016.\n\n\nFINDINGS TO DATE\nAbout 58% of TF and 52% of TM cohort members received hormonal therapy at Kaiser Permanente. Chest surgery was more common among TM participants (12% vs 0.3%). The proportions of transgender participants who underwent genital reconstruction surgeries were similar (4%-5%) in the two transgender groups. Results indicate that there are sufficient numbers of events in the TF and TM cohorts to further examine mental health status, cardiovascular events, diabetes, HIV and most common cancers.\n\n\nFUTURE PLANS\nSTRONG is well positioned to fill existing knowledge gaps through comparisons of transgender and reference populations and through analyses of health status before and after gender affirmation treatment. Analyses will include incidence of cardiovascular disease, mental health, HIV and diabetes, as well as changes in laboratory-based endpoints (eg, polycythemia and bone density), overall and in relation to gender affirmation therapy.",
"title": ""
},
{
"docid": "d9d0edec2ad5ac8120fb8626f208af6c",
"text": "Light-Field enables us to observe scenes from free viewpoints. However, it generally consists of 4-D enormous data, that are not suitable for storing or transmitting without effective compression. 4-D Light-Field is very redundant because essentially it includes just 3-D scene information. Actually, although robust 3-D scene estimation such as depth recovery from Light-Field is not so easy, we successfully derived a method of reconstructing Light-Field directly from 3-D information composed of multi-focus images without any scene estimation. On the other hand, it is easy to synthesize multi-focus images from Light-Field. In this paper, based on the method, we propose novel Light-Field compression via synthesized multi-focus images as effective representation of 3-D scenes. Multi-focus images are easily compressed because they contain mostly low frequency components. We show experimental results by using synthetic and real images. Reconstruction quality of the method is robust even at very low bit-rate.",
"title": ""
},
{
"docid": "b2c299e13eff8776375c14357019d82e",
"text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.",
"title": ""
},
{
"docid": "76e407bc17d0317eae8ff004dc200095",
"text": "Major advances have recently been made in merging language and vision representations. But most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even preschool children) can abstract over raw data to perform certain types of higher-level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like few, some and all. From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in most fish are red, most encodes the proportion of fish which are red fish. In this paper, we study how well current language and vision strategies model such relations. We show that state-of-the-art attention mechanisms coupled with a traditional linguistic formalisation of quantifiers gives best performance on the task. Additionally, we provide insights on the role of 'gist' representations in quantification. A 'logical' strategy to tackle the task would be to first obtain a numerosity estimation for the two involved sets and then compare their cardinalities. We however argue that precisely identifying the composition of the sets is not only beyond current state-of-the-art models but perhaps even detrimental to a task that is most efficiently performed by refining the approximate numerosity estimator of the system.",
"title": ""
},
{
"docid": "f1c5f6f2bdff251e91df1dbd1e2302b2",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
{
"docid": "a6a7007f64e5d615c641048d6c630e03",
"text": "Assessment Clinic, Department of Surgery, Flinders University and Medical Centre, Adelaide, South Australia Good understanding of a patient’s lymphoedema or their risk of it is based on accurate and appropriate assessment of their medical, surgical and familial history, as well as taking baseline measures which can provide an indication of structural and functional changes. If we want the holistic picture, we should also examine the impact that lymphoedema has on the patient’s quality of life and activities of daily living.",
"title": ""
},
{
"docid": "aaa2a2971b070bc6e59a4ca9bcd00b49",
"text": "In this study, the relationship between psychopathy and the prepetration of sexual homicide was investigated. The official file descriptions of sexual homicides committed by 18 psychopathic and 20 nonpsychopathic Canadian offenders were coded (by coders unaware of Psychopathy Checklist--Revised [PCL--R] scores) for characteristics of the victim, victim/perpetrator relationship, and evidence of gratuitous and sadistic violent behavior. Results indicated that most (84.7%) of the sexual murderers scored in the moderate to high range on the PCL--R. The majority of victims (66.67%) were female strangers, with no apparent influence of psychopathy on victim choice. Homicides committed by psychopathic offenders (using a PCL--R cut-off of 30) contained a significantly higher level of both gratuitous and sadistic violence than nonpsychopathic offenders. Most (82.4%) of the psychopaths exhibited some degree of sadistic behavior in their homicides compared to 52.6% of the nonpsychopaths. Implications for homicide investigations are discussed.",
"title": ""
},
{
"docid": "ceb9cfea66bb08a73c48c2cef82ff7d0",
"text": "In this letter, we propose a novel supervised change detection method based on a deep siamese convolutional network for optical aerial images. We train a siamese convolutional network using the weighted contrastive loss. The novelty of the method is that the siamese network is learned to extract features directly from the image pairs. Compared with hand-crafted features used by the conventional change detection method, the extracted features are more abstract and robust. Furthermore, because of the advantage of the weighted contrastive loss function, the features have a unique property: the feature vectors of the changed pixel pair are far away from each other, while the ones of the unchanged pixel pair are close. Therefore, we use the distance of the feature vectors to detect changes between the image pair. Simple threshold segmentation on the distance map can even obtain good performance. For improvement, we use a $k$ -nearest neighbor approach to update the initial result. Experimental results show that the proposed method produces results comparable, even better, with the two state-of-the-art methods in terms of F-measure.",
"title": ""
}
] |
scidocsrr
|
e0a8bda10c5595a4a07a428ce6dd2a29
|
A new image filtering method: Nonlocal image guided averaging
|
[
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "a9612aacde205be2d753c5119b9d95d3",
"text": "We propose a multi-object multi-camera framework for tracking large numbers of tightly-spaced objects that rapidly move in three dimensions. We formulate the problem of finding correspondences across multiple views as a multidimensional assignment problem and use a greedy randomized adaptive search procedure to solve this NP-hard problem efficiently. To account for occlusions, we relax the one-to-one constraint that one measurement corresponds to one object and iteratively solve the relaxed assignment problem. After correspondences are established, object trajectories are estimated by stereoscopic reconstruction using an epipolar-neighborhood search. We embedded our method into a tracker-to-tracker multi-view fusion system that not only obtains the three-dimensional trajectories of closely-moving objects but also accurately settles track uncertainties that could not be resolved from single views due to occlusion. We conducted experiments to validate our greedy assignment procedure and our technique to recover from occlusions. We successfully track hundreds of flying bats and provide an analysis of their group behavior based on 150 reconstructed 3D trajectories.",
"title": ""
},
{
"docid": "e70c6ccc129f602bd18a49d816ee02a9",
"text": "This purpose of this paper is to show how prevalent features of successful human tutoring interactions can be integrated into a pedagogical agent, AutoTutor. AutoTutor is a fully automated computer tutor that responds to learner input by simulating the dialog moves of effective, normal human tutors. AutoTutor’s delivery of dialog moves is organized within a 5step framework that is unique to normal human tutoring interactions. We assessed AutoTutor’s performance as an effective tutor and conversational partner during tutoring sessions with virtual students of varying ability levels. Results from three evaluation cycles indicate the following: (1) AutoTutor is capable of delivering pedagogically effective dialog moves that mimic the dialog move choices of human tutors, and (2) AutoTutor is a reasonably effective conversational partner. INTRODUCTION AND BACKGROUND Over the last decade a number of researchers have attempted to uncover the mechanisms of human tutoring that are responsible for student learning gains. Many of the informative findings have been reported in studies that have systematically analyzed the collaborative discourse that occurs between tutors and students (Fox, 1993; Graesser & Person, 1994; Graesser, Person, & Magliano, 1995; Hume, Michael, Rovick, & Evens, 1996; McArthur, Stasz, & Zmuidzinas, 1990; Merrill, Reiser, Ranney, & Trafton, 1992; Moore, 1995; Person & Graesser, 1999; Person, Graesser, Magliano, & Kreuz, 1994; Person, Kreuz, Zwaan, & Graesser, 1995; Putnam, 1987). For example, we have learned that the tutorial session is predominately controlled by the tutor. That is, tutors, not students, typically determine when and what topics will be covered in the session. Further, we know that human tutors rarely employ sophisticated or “ideal” tutoring models that are often incorporated into intelligent tutoring systems. Instead, human tutors are more likely to rely on localized strategies that are embedded within conversational turns. Although many findings such as these have illuminated the tutoring process, they present formidable challenges for designers of intelligent tutoring systems. After all, building a knowledgeable conversational partner is no small feat. However, if designers of future tutoring systems wish to capitalize on the knowledge gained from human tutoring studies, the next generation of tutoring systems will incorporate pedagogical agents that engage in learning dialogs with students. The purpose of this paper is twofold. First, we will describe how prevalent features of successful human tutoring interactions can be incorporated into a pedagogical agent, AutoTutor. Second, we will provide data from several preliminary performance evaluations in which AutoTutor interacts with virtual students of varying ability levels. Person, Graesser, Kreuz, Pomeroy, and the Tutoring Research Group AutoTutor is a fully automated computer tutor that is currently being developed by the Tutoring Research Group (TRG). AutoTutor is a working system that attempts to comprehend students’ natural language contributions and then respond to the student input by simulating the dialogue moves of human tutors. AutoTutor differs from other natural language tutors in several ways. 
First, AutoTutor does not restrict the natural language input of the student like other systems (e.g., Adele (Shaw, Johnson, & Ganeshan, 1999); the Ymir agents (Cassell & Thórisson, 1999); Cirscim-Tutor (Hume, Michael, Rovick, & Evens, 1996; Zhou et al., 1999); Atlas (Freedman, 1999); and Basic Electricity and Electronics (Moore, 1995; Rose, Di Eugenio, & Moore, 1999)). These systems tend to limit student input to a small subset of judiciously worded speech acts. Second, AutoTutor does not allow the user to substitute natural language contributions with GUI menu options like those in the Atlas and Adele systems. The third difference involves the open-world nature of AutoTutor’s content domain (i.e., computer literacy). The previously mentioned tutoring systems are relatively more closed-world in nature, and therefore, constrain the scope of student contributions. The current version of AutoTutor simulates the tutorial dialog moves of normal, untrained tutors; however, plans for subsequent versions include the integration of more sophisticated ideal tutoring strategies. AutoTutor is currently designed to assist college students learn about topics covered in an introductory computer literacy course. In a typical tutoring session with AutoTutor, students will learn the fundamentals of computer hardware, the operating system, and the Internet. A Brief Sketch of AutoTutor AutoTutor is an animated pedagogical agent that serves as a conversational partner with the student. AutoTutor’s interface is comprised of four features: a two-dimensional, talking head, a text box for typed student input, a text box that displays the problem/question being discussed, and a graphics box that displays pictures and animations that are related to the topic at hand. AutoTutor begins the session by introducing himself and then presents the student with a question or problem that is selected from a curriculum script. The question/problem remains in a text box at the top of the screen until AutoTutor moves on to the next topic. For some questions and problems, there are graphical displays and animations that appear in a specially designated box on the screen. Once AutoTutor has presented the student with a problem or question, a multi-turn tutorial dialog occurs between AutoTutor and the learner. All student contributions are typed into the keyboard and appear in a text box at the bottom of the screen. AutoTutor responds to each student contribution with one or a combination of pedagogically appropriate dialog moves. These dialog moves are conveyed via synthesized speech, appropriate intonation, facial expressions, and gestures and do not appear in text form on the screen. In the future, we hope to have AutoTutor handle speech recognition, so students can speak their contributions. However, current speech recognition packages require time-consuming training that is not optimal for systems that interact with multiple users. The various modules that enable AutoTutor to interact with the learner will be described in subsequent sections of the paper. For now, however, it is important to note that our initial goals for building AutoTutor have been achieved. That is, we have designed a computer tutor that participates in a conversation with the learner while simulating the dialog moves of normal human tutors. WHY SIMULATE NORMAL HUMAN TUTORS? It has been well documented that normal, untrained human tutors are effective. 
Effect sizes ranging between .5 and 2.3 have been reported in studies where student learning gains were measured (Bloom, 1984; Cohen, Kulik, & Kulik, 1982). For quite a while, these rather large effect sizes were somewhat puzzling. That is, normal tutors typically do not have expert domain knowledge nor do they have knowledge about sophisticated tutoring strategies. In order to gain a better understanding of the primary mechanisms that are responsible for student learning Simulating Human Tutor Dialog Moves in AutoTutor gains, a handful of researchers have systematically analyzed the dialogue that occurs between normal, untrained tutors and students (Graesser & Person, 1994; Graesser et al., 1995; Person & Graesser, 1999; Person et al., 1994; Person et al., 1995). Graesser, Person, and colleagues analyzed over 100 hours of tutoring interactions and identified two prominent features of human tutoring dialogs: (1) a five-step dialog frame that is unique to tutoring interactions, and (2) a set of tutor-initiated dialog moves that serve specific pedagogical functions. We believe these two features are responsible for the positive learning outcomes that occur in typical tutoring settings, and further, these features can be implemented in a tutoring system more easily than the sophisticated methods and strategies that have been advocated by other educational researchers and ITS developers. Five-step Dialog Frame The structure of human tutorial dialogs differs from learning dialogs that often occur in classrooms. Mehan (1979) and others have reported a 3-step pattern that is prevalent in classroom interactions. This pattern is often referred to as IRE, which stands for Initiation (a question or claim articulated by the teacher), Response (an answer or comment provided by the student), and Evaluation (teacher evaluates the student contribution). In tutoring, however, the dialog is managed by a 5-step dialog frame (Graesser & Person, 1994; Graesser et al., 1995). The five steps in this frame are presented below. Step 1: Tutor asks question (or presents problem). Step 2: Learner answers question (or begins to solve problem). Step 3: Tutor gives short immediate feedback on the quality of the answer (or solution). Step 4: Tutor and learner collaboratively improve the quality of the answer. Step 5: Tutor assesses learner’s understanding of the answer. This 5-step dialog frame in tutoring is a significant augmentation over the 3-step dialog frame in classrooms. We believe that the advantage of tutoring over classroom settings lies primarily in Step 4. Typically, Step 4 is a lengthy multi-turn dialog in which the tutor and student collaboratively contribute to the explanation that answers the question or solves the problem. At a macro-level, the dialog that occurs between AutoTutor and the learner conforms to Steps 1 through 4 of the 5-step frame. For example, at the beginning of each new topic, AutoTutor presents the learner with a problem or asks the learner a question (Step 1). The learner then attempts to solve the problem or answer the question (Step 2). Next, AutoTutor provides some type of short, evaluative feedback (Step 3). During Step 4, AutoTutor employs a variety of dialog moves (see next section) that encourage learner participation. Thus, ins",
"title": ""
},
{
"docid": "40495cc96353f56481ed30f7f5709756",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "2753e0a54d1a58993fcdd79ee40f0aac",
"text": "This study investigated the effectiveness of the WAIS-R Block Design subtest to predict everyday spatial ability for 65 university undergraduates (15 men, 50 women) who were administered Block Design, the Standardized Road Map Test of Direction Sense, and the Everyday Spatial Activities Test. In addition, the verbally loaded National Adult Reading Test was administered to assess whether the more visuospatial Block Design subtest was a better predictor of spatial ability. Moderate support was found. When age and sex were accounted for, Block Design accounted for 36% of the variance in performance (r = -.62) on the Road Map Test and 19% of the variance on the performance of the Everyday Spatial Activities Test (r = .42). In contrast, the scores on the National Adult Reading Test did not predict performance on the Road Map Test or Everyday Spatial Abilities Test. This suggests that, with appropriate caution, Block Design could be used as a measure of everyday spatial abilities.",
"title": ""
},
{
"docid": "c70d8ae9aeb8a36d1f68ba0067c74696",
"text": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on simple link structure between a finite set of entities, ignoring the variety of data types that are often used in knowledge bases, such as text, images, and numerical values. In this paper, we propose multimodal knowledge base embeddings (MKBE) that use different neural encoders for this variety of observed data, and combine them with existing relational models to learn embeddings of the entities and multimodal data. Further, using these learned embedings and different neural decoders, we introduce a novel multimodal imputation model to generate missing multimodal values, like text and images, from information in the knowledge base. We enrich existing relational datasets to create two novel benchmarks that contain additional information such as textual descriptions and images of the original entities. We demonstrate that our models utilize this additional information effectively to provide more accurate link prediction, achieving state-of-the-art results with a considerable gap of 5-7% over existing methods. Further, we evaluate the quality of our generated multimodal values via a user study. We have release the datasets and the opensource implementation of our models at https: //github.com/pouyapez/mkbe.",
"title": ""
},
{
"docid": "d3b2283ce3815576a084f98c34f37358",
"text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.",
"title": ""
},
{
"docid": "3b9491f337ab93d65831a0dfe687a639",
"text": "—The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximumlikelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/. [Algorithm; computer simulations; maximum likelihood; phylogeny; rbcL; RDPII project.] The size of homologous sequence data sets has increased dramatically in recent years, and many of these data sets now involve several hundreds of taxa. Moreover, current probabilistic sequence evolution models (Swofford et al., 1996; Page and Holmes, 1998), notably those including rate variation among sites (Uzzell and Corbin, 1971; Jin and Nei, 1990; Yang, 1996), require an increasing number of calculations. Therefore, the speed of phylogeny reconstruction methods is becoming a significant requirement and good compromises between speed and accuracy must be found. The maximum likelihood (ML) approach is especially accurate for building molecular phylogenies. Felsenstein (1981) brought this framework to nucleotide-based phylogenetic inference, and it was later also applied to amino acid sequences (Kishino et al., 1990). Several variants were proposed, most notably the Bayesian methods (Rannala and Yang 1996; and see below), and the discrete Fourier analysis of Hendy et al. (1994), for example. Numerous computer studies (Huelsenbeck and Hillis, 1993; Kuhner and Felsenstein, 1994; Huelsenbeck, 1995; Rosenberg and Kumar, 2001; Ranwez and Gascuel, 2002) have shown that ML programs can recover the correct tree from simulated data sets more frequently than other methods can. Another important advantage of the ML approach is the ability to compare different trees and evolutionary models within a statistical framework (see Whelan et al., 2001, for a review). However, like all optimality criterion–based phylogenetic reconstruction approaches, ML is hampered by computational difficulties, making it impossible to obtain the optimal tree with certainty from even moderate data sets (Swofford et al., 1996). Therefore, all practical methods rely on heuristics that obtain near-optimal trees in reasonable computing time. 
Moreover, the computation problem is especially difficult with ML, because the tree likelihood not only depends on the tree topology but also on numerical parameters, including branch lengths. Even computing the optimal values of these parameters on a single tree is not an easy task, particularly because of possible local optima (Chor et al., 2000). The usual heuristic method, implemented in the popular PHYLIP (Felsenstein, 1993 ) and PAUP∗ (Swofford, 1999 ) packages, is based on hill climbing. It combines stepwise insertion of taxa in a growing tree and topological rearrangement. For each possible insertion position and rearrangement, the branch lengths of the resulting tree are optimized and the tree likelihood is computed. When the rearrangement improves the current tree or when the position insertion is the best among all possible positions, the corresponding tree becomes the new current tree. Simple rearrangements are used during tree growing, namely “nearest neighbor interchanges” (see below), while more intense rearrangements can be used once all taxa have been inserted. The procedure stops when no rearrangement improves the current best tree. Despite significant decreases in computing times, notably in fastDNAml (Olsen et al., 1994 ), this heuristic becomes impracticable with several hundreds of taxa. This is mainly due to the two-level strategy, which separates branch lengths and tree topology optimization. Indeed, most calculations are done to optimize the branch lengths and evaluate the likelihood of trees that are finally rejected. New methods have thus been proposed. Strimmer and von Haeseler (1996) and others have assembled fourtaxon (quartet) trees inferred by ML, in order to reconstruct a complete tree. However, the results of this approach have not been very satisfactory to date (Ranwez and Gascuel, 2001 ). Ota and Li (2000, 2001) described",
"title": ""
},
{
"docid": "32ec9f1c0dbc7caaf6ece7ba105eace1",
"text": "A major problem worldwide is the potential loss of fisheries, forests, and water resources. Understanding of the processes that lead to improvements in or deterioration of natural resources is limited, because scientific disciplines use different concepts and languages to describe and explain complex social-ecological systems (SESs). Without a common framework to organize findings, isolated knowledge does not cumulate. Until recently, accepted theory has assumed that resource users will never self-organize to maintain their resources and that governments must impose solutions. Research in multiple disciplines, however, has found that some government policies accelerate resource destruction, whereas some resource users have invested their time and energy to achieve sustainability. A general framework is used to identify 10 subsystem variables that affect the likelihood of self-organization in efforts to achieve a sustainable SES.",
"title": ""
},
{
"docid": "b9a2a41e12e259fbb646ff92956e148e",
"text": "The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up respectively the open and embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden beyond walls. Presented solution has a cost comparable to ordinary RFID tags, and the passive system also has infinite life time since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.",
"title": ""
},
{
"docid": "8c3e545f12c621e0ffe1460b9db959e7",
"text": "A unique behavior of humans is modifying one’s unobservable behavior based on the reaction of others for cooperation. We used a card game called Hanabi as an evaluation task of imitating human reflective intelligence with artificial intelligence. Hanabi is a cooperative card game with incomplete information. A player cooperates with an opponent in building several card sets constructed with the same color and ordered numbers. However, like a blind man's bluff, each player sees the cards of all other players except his/her own. Also, communication between players is restricted to information about the same numbers and colors, and the player is required to read his/his opponent's intention with the opponent's hand, estimate his/her cards with incomplete information, and play one of them for building a set. We compared human play with several simulated strategies. The results indicate that the strategy with feedbacks from simulated opponent's viewpoints achieves more score than other strategies. Introduction of Cooperative Game Social Intelligence estimating an opponent's thoughts from his/her behavior – is a unique function of humans. The solving process of this social intelligence is one of the interesting challenges for both artificial intelligence (AI) and cognitive science. Bryne et al. hypothesized that the human brain increases mainly due to this type of social requirement as a evolutionary pressure (Byrne & Whiten 1989). One of the most difficult tasks for using social intelligence is estimating one’s own unobservable information from the behavior of others and to modify one's own information. This type of reflective behavior – using other behavior as a looking glass – is both a biological and psychological task. For example, the human voice is informed by others via sound waves through the air, but informed by him/herself through bone conduction (Chen et al. 2007). In this scenario, a person cannot observe his/her own voice directly. For improving social Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. influence from one's voice, one needs to observe others' reactions and modify his/her voice. Joseph et al. also defined such unobservable information from oneself as a \"blind spot\" from a psychological viewpoint (Luft & Ingham 1961). In this study, we solved such reflective estimation tasks using a cooperative game involving incomplete information. We used a card game called Hanabi as a challenge task. Hanabi is a cooperative card game. It has three unique features for contributing to AI and multi-agent system (MAS) studies compared with other card games that have been used in AI studies. First, it is a cooperative card game and not a battle card game. Every player is required to cooperate and build a set of five different colored fireworks (Hanabi in Japanese) before the cards run out. This requires the AI program to handle cooperation of multiple agents. Second, every player can observe all other players' cards except his/her own. This does not require a coordinative leader and requires pure coordination between multiple agents. Finally, communication between players is prohibited except for restricted informing actions for a color or a number of opponent's cards. This allows the AI program to avoid handling natural language processing matters directly. Hanabi won the top German game award due to these unique features (Jahres 2013). 
We created an AI program to play Hanabi with multiple strategies including simulation of opponents' viewpoints with opponents' behavior, and evaluated how this type of reflective simulation contributes to earning a high score in this game. The paper is organized as follows. Section 2 gives background on incomplete information games involving AI and what challenges there are with Hanabi. Section 3 explains the rules of Hanabi and models. We focused on a two-player game in this paper. Section 4 explains several strategies for playing Hanabi. Section 5 evaluates these strategies and the results are discussed in Section 6. Section 7 explains the contribution of our research, limitations, and future work, and Section 8 concludes our paper.",
"title": ""
},
{
"docid": "62cf2ae97e48e6b57139f305d616ec1b",
"text": "Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., Find the average user time spent on the top 10 most popular web pages on the UK domain web graph.). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding the resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocations among multiple analytical workloads motivates the need for estimating the runtime of the workload before its actual execution. Predicting runtime of analytical workloads is a challenging problem as runtime depends on a large number of factors that are hard to model a priori execution. These factors can be summarized as workload characteristics (data statistics and processing costs) , the execution configuration (deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed to estimate absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence and that of user defined data pre-processing operators which are not “owned” by the underlying data management system. This thesis demonstrates that runtime for data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and per-iteration runtime for a class of iterative machine learning algorithms that are run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with that of machine learning-based models. Through a",
"title": ""
},
{
"docid": "cf54533bc317b960fc80f22baa26d7b1",
"text": "The state-of-the-art named entity recognition (NER) systems are statistical machine learning models that have strong generalization capability (i.e., can recognize unseen entities that do not appear in training data) based on lexical and contextual information. However, such a model could still make mistakes if its features favor a wrong entity type. In this paper, we utilize Wikipedia as an open knowledge base to improve multilingual NER systems. Central to our approach is the construction of high-accuracy, highcoverage multilingual Wikipedia entity type mappings. These mappings are built from weakly annotated data and can be extended to new languages with no human annotation or language-dependent knowledge involved. Based on these mappings, we develop several approaches to improve an NER system. We evaluate the performance of the approaches via experiments on NER systems trained for 6 languages. Experimental results show that the proposed approaches are effective in improving the accuracy of such systems on unseen entities, especially when a system is applied to a new domain or it is trained with little training data (up to 18.3 F1 score improvement).",
"title": ""
},
{
"docid": "79b0f13bec3201bf2ca770b268085306",
"text": "In this paper, we introduce a new 3D hand gesture recognition approach based on a deep learning model. We propose a new Convolutional Neural Network (CNN) where sequences of hand-skeletal joints' positions are processed by parallel convolutions; we then investigate the performance of this model on hand gesture sequence classification tasks. Our model only uses hand-skeletal data and no depth image. Experimental results show that our approach achieves a state-of-the-art performance on a challenging dataset (DHG dataset from the SHREC 2017 3D Shape Retrieval Contest), when compared to other published approaches. Our model achieves a 91.28% classification accuracy for the 14 gesture classes case and an 84.35% classification accuracy for the 28 gesture classes case.",
"title": ""
},
{
"docid": "7ca62c2da424c826744bca7196f07def",
"text": "Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel ‘fact-based’ visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to ‘reason’ about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.",
"title": ""
},
{
"docid": "9bcf4fcb795ab4cfe4e9d2a447179feb",
"text": "In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the “inputs” into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in detect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors which need to be identified.",
"title": ""
},
{
"docid": "3c5bb0b08b365029a3fc1a7ef73e3aa7",
"text": "This paper proposes an estimation method to identify the electrical model parameters of photovoltaic (PV) modules and makes a comparison with other methods already popular in the technical literature. Based on the full single-diode model, the mathematical description of the I-V characteristic of modules is generally represented by a coupled nonlinear equation with five unknown parameters, which is difficult to solve by an analytical approach. The aim of the proposed method is to find the five unknown parameters that guarantee the minimum absolute error between the P-V curves generated by the electrical model and the P-V curves provided by the manufacturers' datasheets for different external conditions such as temperature and irradiance. The first advantage of the proposed method is that the parameters are estimated using the P-V curves instead of I-V curves, since most of the applications that use the electrical model want to accurately estimate the extracted power. The second advantage is that the value ranges of each unknown parameter respect their physical meaning. In order to prove the effectiveness of the proposition, a comparison among methods is carried out using both types of P-V and I-V curves: those obtained by manufacturers' datasheets and those extracted experimentally in the laboratory.",
"title": ""
},
{
"docid": "f5405c8fb7ad62d4277837bd7036b0d3",
"text": "Context awareness is one of the important fields in ubiquitous computing. Smart Home, a specific instance of ubiquitous computing, provides every family with opportunities to enjoy the power of hi-tech home living. Discovering that relationship among user, activity and context data in home environment is semantic, therefore, we apply ontology to model these relationships and then reason them as the semantic information. In this paper, we present the realization of smart home’s context-aware system based on ontology. We discuss the current challenges in realizing the ontology context base. These challenges can be listed as collecting context information from heterogeneous sources, such as devices, agents, sensors into ontology, ontology management, ontology querying, and the issue related to environment database explosion.",
"title": ""
},
{
"docid": "36e8ecc13c1f92ca3b056359e2d803f0",
"text": "We propose a novel module, the reviewer module, to improve the encoder-decoder learning framework. The reviewer module is generic, and can be plugged into an existing encoder-decoder model. The reviewer module performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a fact vector after each review step; the fact vectors are used as the input of the attention mechanism in the decoder. We show that the conventional encoderdecoders are a special case of our framework. Empirically, we show that our framework can improve over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.",
"title": ""
},
{
"docid": "59f29d3795e747bb9cee8fcbf87cb86f",
"text": "This paper introduces the development of a semi-active friction based variable physical damping actuator (VPDA) unit. The realization of this unit aims to facilitate the control of compliant robotic joints by providing physical variable damping on demand assisting on the regulation of the oscillations induced by the introduction of compliance. The mechatronics details and the dynamic model of the damper are introduced. The proposed variable damper mechanism is evaluated on a simple 1-DOF compliant joint linked to the ground through a torsion spring. This flexible connection emulates a compliant joint, generating oscillations when the link is perturbed. Preliminary results are presented to show that the unit and the proposed control scheme are capable of replicating simulated relative damping values with good fidelity.",
"title": ""
},
{
"docid": "3b31d07c6a5f7522e2060d5032ca5177",
"text": "In the past few years detection of repeatable and distinctive keypoints on 3D surfaces has been the focus of intense research activity, due on the one hand to the increasing diffusion of low-cost 3D sensors, on the other to the growing importance of applications such as 3D shape retrieval and 3D object recognition. This work aims at contributing to the maturity of this field by a thorough evaluation of several recent 3D keypoint detectors. A categorization of existing methods in two classes, that allows for highlighting their common traits, is proposed, so as to abstract all algorithms to two general structures. Moreover, a comprehensive experimental evaluation is carried out in terms of repeatability, distinctiveness and computational efficiency, based on a vast data corpus characterized by nuisances such as noise, clutter, occlusions and viewpoint changes.",
"title": ""
}
] |
scidocsrr
|
44ba2f8d3461d9fdad4ab07005cdc5a0
|
Deep Reinforcement Learning for Visual Object Tracking in Videos
|
[
{
"docid": "e14d1f7f7e4f7eaf0795711fb6260264",
"text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"title": ""
}
] |
[
{
"docid": "727c36aac7bd0327f3edb85613dcf508",
"text": "The interpretation of adjective-noun pairs plays a crucial role in tasks such as recognizing textual entailment. Formal semantics often places adjectives into a taxonomy which should dictate adjectives’ entailment behavior when placed in adjective-noun compounds. However, we show experimentally that the behavior of subsective adjectives (e.g. red) versus non-subsective adjectives (e.g. fake) is not as cut and dry as often assumed. For example, inferences are not always symmetric: while ID is generally considered to be mutually exclusive with fake ID, fake ID is considered to entail ID. We discuss the implications of these findings for automated natural language understanding.",
"title": ""
},
{
"docid": "889dd22fcead3ce546e760bda8ef4980",
"text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.",
"title": ""
},
{
"docid": "ae153e953060e9e8a742c8a9149521a8",
"text": "This paper briefly describes three Windkessel models and demonstrates application of Matlab for mathematical modelling and simulation experiments with the models. Windkessel models are usually used to describe basic properties vascular bed and to study relationships among hemodynamic variables in great vessels. Analysis of a systemic or pulmonary arterial load described by parameters such as arterial compliance and peripheral resistance, is important, for example, in quantifying the effects of vasodilator or vasoconstrictor drugs. Also, a mathematical model of the relationship between blood pressure and blood flow in the aorta and pulmonary artery can be useful, for example, in the design, development and functional analysis of a mechanical heart and/or heart-lung machines. We found that ascending aortic pressure could be predicted better from aortic flow by using the four-element windkessel than by using the three-element windkessel or two-elment windkessel. The root-mean-square errors were smaller for the four-element windkessel.",
"title": ""
},
{
"docid": "c3ad915ac57bf56c4adc47acee816b54",
"text": "How does the brain “produce” conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of whereresponses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements forunconsciousdetection of sensory signals. We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for “backward referral” of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! Another experimental study supported my “time-on” theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciouslyand become conscious only if neuronal activities persist for a sufficiently long time. 
Conscious experiences must be discontinuous if there is a delay for each; the “stream of consciousness” must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. Evidence-based views could now be accepted with some confidence.",
"title": ""
},
{
"docid": "ea236e7ab1b3431523c01c51a3186009",
"text": "Analysis-by-synthesis has been a successful approach for many tasks in computer vision, such as 6D pose estimation of an object in an RGB-D image which is the topic of this work. The idea is to compare the observation with the output of a forward process, such as a rendered image of the object of interest in a particular pose. Due to occlusion or complicated sensor noise, it can be difficult to perform this comparison in a meaningful way. We propose an approach that \"learns to compare\", while taking these difficulties into account. This is done by describing the posterior density of a particular object pose with a convolutional neural network (CNN) that compares observed and rendered images. The network is trained with the maximum likelihood paradigm. We observe empirically that the CNN does not specialize to the geometry or appearance of specific objects. It can be used with objects of vastly different shapes and appearances, and in different backgrounds. Compared to state-of-the-art, we demonstrate a significant improvement on two different datasets which include a total of eleven objects, cluttered background, and heavy occlusion.",
"title": ""
},
{
"docid": "2b8efba9363b5f177089534edeb877a9",
"text": "This article presents a methodology that allows the development of new converter topologies for single-input, multiple-output (SIMO) from different basic configurations of single-input, single-output dc-dc converters. These typologies have in common the use of only one power-switching device, and they are all nonisolated converters. Sixteen different topologies are highlighted, and their main features are explained. The 16 typologies include nine twooutput-type, five three-output-type, one four-output-type, and one six-output-type dc-dc converter configurations. In addition, an experimental prototype of a three-output-type configuration with six different output voltages based on a single-ended primary inductance (SEPIC)-Cuk-boost combination converter was developed, and the proposed design methodology for a basic converter combination was experimentally verified.",
"title": ""
},
{
"docid": "2ad34a7b1ed6591d683fe1450d1bd25f",
"text": "An extension of the Gauss-Newton method for nonlinear equations to convex composite optimization is described and analyzed. Local quadratic convergence is established for the minimization of h o F under two conditions, namely h has a set of weak sharp minima, C, and there is a regular point of the inclusion F ( x ) E C. This result extends a similar convergence result due to Womersley (this journal, 1985) which employs the assumption of a strongly unique solution of the composite function h o F. A backtracking line-search is proposed as a globalization strategy. For this algorithm, a global convergence result is established, with a quadratic rate under the regularity assumption.",
"title": ""
},
{
"docid": "553de71fcc3e4e6660015632eee751b1",
"text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset. The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.",
"title": ""
},
{
"docid": "2496fa63868717ce2ed56c1777c4b0ed",
"text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. ‡‡",
"title": ""
},
{
"docid": "cbaead0172b87c670929d38a5e2199bb",
"text": "Internet addiction is characterized by excessive or poorly controlled preoccupations, urges or behaviours regarding computer use and internet access that lead to impairment or distress. The condition has attracted increasing attention in the popular media and among researchers, and this attention has paralleled the growth in computer (and Internet) access. Prevalence estimates vary widely, although a recent random telephone survey of the general US population reported an estimate of 0.3-0.7%. The disorder occurs worldwide, but mainly in countries where computer access and technology are widespread. Clinical samples and a majority of relevant surveys report a male preponderance. Onset is reported to occur in the late 20s or early 30s age group, and there is often a lag of a decade or more from initial to problematic computer usage. Internet addiction has been associated with dimensionally measured depression and indicators of social isolation. Psychiatric co-morbidity is common, particularly mood, anxiety, impulse control and substance use disorders. Aetiology is unknown, but probably involves psychological, neurobiological and cultural factors. There are no evidence-based treatments for internet addiction. Cognitive behavioural approaches may be helpful. There is no proven role for psychotropic medication. Marital and family therapy may help in selected cases, and online self-help books and tapes are available. Lastly, a self-imposed ban on computer use and Internet access may be necessary in some cases.",
"title": ""
},
{
"docid": "758eb7a0429ee116f7de7d53e19b3e02",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "279870c84659e0eb6668e1ec494e77c9",
"text": "There is a need to move from opinion-based education to evidence-based education. Best evidence medical education (BEME) is the implementation, by teachers in their practice, of methods and approaches to education based on the best evidence available. It involves a professional judgement by the teacher about his/her teaching taking into account a number of factors-the QUESTS dimensions. The Quality of the research evidence available-how reliable is the evidence? the Utility of the evidence-can the methods be transferred and adopted without modification, the Extent of the evidence, the Strength of the evidence, the Target or outcomes measured-how valid is the evidence? and the Setting or context-how relevant is the evidence? The evidence available can be graded on each of the six dimensions. In the ideal situation the evidence is high on all six dimensions, but this is rarely found. Usually the evidence may be good in some respects, but poor in others.The teacher has to balance the different dimensions and come to a decision on a course of action based on his or her professional judgement.The QUESTS dimensions highlight a number of tensions with regard to the evidence in medical education: quality vs. relevance; quality vs. validity; and utility vs. the setting or context. The different dimensions reflect the nature of research and innovation. Best Evidence Medical Education encourages a culture or ethos in which decision making takes place in this context.",
"title": ""
},
{
"docid": "201d9105d956bc8cb8d692490d185487",
"text": "BACKGROUND\nDespite its evident clinical benefits, single-incision laparoscopic surgery (SILS) imposes inherent limitations of collision between external arms and inadequate triangulation because multiple instruments are inserted through a single port at the same time.\n\n\nMETHODS\nA robot platform appropriate for SILS was developed wherein an elbowed instrument can be equipped to easily create surgical triangulation without the interference of robot arms. A novel joint mechanism for a surgical instrument actuated by a rigid link was designed for high torque transmission capability.\n\n\nRESULTS\nThe feasibility and effectiveness of the robot was checked through three kinds of preliminary tests: payload, block transfer, and ex vivo test. Measurements showed that the proposed robot has a payload capability >15 N with 7 mm diameter.\n\n\nCONCLUSIONS\nThe proposed robot is effective and appropriate for SILS, overcoming inadequate triangulation and improving workspace and traction force capability.",
"title": ""
},
{
"docid": "e871e2b5bd1ed95fd5302e71f42208bf",
"text": "Chapters 2–7 make up Part II of the book: artificial neural networks. After introducing the basic concepts of neurons and artificial neuron learning rules in Chapter 2, Chapter 3 describes a particular formalism, based on signal-plus-noise, for the learning problem in general. After presenting the basic neural network types this chapter reviews the principal algorithms for error function minimization/optimization and shows how these learning issues are addressed in various supervised models. Chapter 4 deals with issues in unsupervised learning networks, such as the Hebbian learning rule, principal component learning, and learning vector quantization. Various techniques and learning paradigms are covered in Chapters 3–6, and especially the properties and relative merits of the multilayer perceptron networks, radial basis function networks, self-organizing feature maps and reinforcement learning are discussed in the respective four chapters. Chapter 7 presents an in-depth examination of performance issues in supervised learning, such as accuracy, complexity, convergence, weight initialization, architecture selection, and active learning. Par III (Chapters 8–15) offers an extensive presentation of techniques and issues in evolutionary computing. Besides the introduction to the basic concepts in evolutionary computing, it elaborates on the more important and most frequently used techniques on evolutionary computing paradigm, such as genetic algorithms, genetic programming, evolutionary programming, evolutionary strategies, differential evolution, cultural evolution, and co-evolution, including design aspects, representation, operators and performance issues of each paradigm. The differences between evolutionary computing and classical optimization are also explained. Part IV (Chapters 16 and 17) introduces swarm intelligence. It provides a representative selection of recent literature on swarm intelligence in a coherent and readable form. It illustrates the similarities and differences between swarm optimization and evolutionary computing. Both particle swarm optimization and ant colonies optimization are discussed in the two chapters, which serve as a guide to bringing together existing work to enlighten the readers, and to lay a foundation for any further studies. Part V (Chapters 18–21) presents fuzzy systems, with topics ranging from fuzzy sets, fuzzy inference systems, fuzzy controllers, to rough sets. The basic terminology, underlying motivation and key mathematical models used in the field are covered to illustrate how these mathematical tools can be used to handle vagueness and uncertainty. This book is clearly written and it brings together the latest concepts in computational intelligence in a friendly and complete format for undergraduate/postgraduate students as well as professionals new to the field. With about 250 pages covering such a wide variety of topics, it would be impossible to handle everything at a great length. Nonetheless, this book is an excellent choice for readers who wish to familiarize themselves with computational intelligence techniques or for an overview/introductory course in the field of computational intelligence. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond—Bernhard Schölkopf and Alexander Smola, (MIT Press, Cambridge, MA, 2002, ISBN 0-262-19475-9). Reviewed by Amir F. Atiya.",
"title": ""
},
{
"docid": "d8b3eb944d373741747eb840a18a490b",
"text": "Natural scenes contain large amounts of geometry, such as hundreds of thousands or even millions of tree leaves and grass blades. Subtle lighting effects present in such environments usually include a significant amount of occlusion effects and lighting variation. These effects are important for realistic renderings of such natural environments; however, plausible lighting and full global illumination computation come at prohibitive costs especially for interactive viewing. As a solution to this problem, we present a simple approximation to integrated visibility over a hemisphere (ambient occlusion) that allows interactive rendering of complex and dynamic scenes. Based on a set of simple assumptions, we show that our method allows the rendering of plausible variation in lighting at modest additional computation and little or no precomputation, for complex and dynamic scenes.",
"title": ""
},
{
"docid": "132bb5b7024de19f4160664edca4b4f5",
"text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.",
"title": ""
},
{
"docid": "c44f060f18e55ccb1b31846e618f3282",
"text": "In multi-label classification, each sample can be associated with a set of class labels. When the number of labels grows to the hundreds or even thousands, existing multi-label classification methods often become computationally inefficient. In recent years, a number of remedies have been proposed. However, they are based either on simple dimension reduction techniques or involve expensive optimization problems. In this paper, we address this problem by selecting a small subset of class labels that can approximately span the original label space. This is performed by an efficient randomized sampling procedure where the sampling probability of each class label reflects its importance among all the labels. Experiments on a number of realworld multi-label data sets with many labels demonstrate the appealing performance and efficiency of the proposed algorithm.",
"title": ""
},
{
"docid": "708915f99102f80b026b447f858e3778",
"text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to current task, improving learning speed. For temporaldifference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learningrate parameter. There are no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) maintain stability of learning under both on and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear complexity λ-adaption algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporaldifference learning methods in real world problems.",
"title": ""
}
] |
scidocsrr
|
ba87e2ebdc3c8a4aea5b135201401c75
|
Bi-directional conversion between graphemes and phonemes using a joint N-gram model
|
[
{
"docid": "fc79bfdb7fbbfa42d2e1614964113101",
"text": "Probability Theory, 2nd ed. Princeton, N. J.: 960. Van Nostrand, 1 121 T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. 131 T. T. Kadota. “Ootrmum recention of binarv sure and Gaussian signals,” Bell Sys. ?‘ech: J., vol. 44;~~. 1621-1658, October 1965. 141 U. Grenander, ‘Stochastic processes and statistical inference,” Arkiv fiir Matematik, vol. 17, pp. 195-277, 1950. 151 L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, O,+nhm 1 a.63 161 J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill. 1956. nn. 269-358. 171 C.. W. Helstrom, “ Solution of the dete&on integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-II, pp. 335-339, July 1965. 181 T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif. Tech. Rept. 7050-4, July 1965. 191 T. T. Kadota, “Optimum reception of nf-ary Gaussian signals in Gaussian noise,” Bell. Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [lOI T. T. Kadota, “Term-by-term differentiability of Mercer’s expansion,” Proc. of Am. Math. Sot., vol. 18, pp. 69-72, February 1967.",
"title": ""
}
] |
[
{
"docid": "cfbfc01ee75019b563c46d4bebfba0f4",
"text": "We present results from gate-all-around (GAA) silicon nanowire (SiNW) MOSFETs fabricated using a process flow capable of achieving a nanowire pitch of 30 nm and a scaled gate pitch of 60 nm. We demonstrate for the first time that GAA SiNW devices can be integrated to density targets commensurate with CMOS scaling needs of the 10 nm node and beyond. In addition, this work achieves the highest performance for GAA SiNW NFETs at a gate pitch below 100 nm.",
"title": ""
},
{
"docid": "8d8dc05c2de34440eb313503226f7e99",
"text": "Disambiguating entity references by annotating them with unique ids from a catalog is a critical step in the enrichment of unstructured content. In this paper, we show that topic models, such as Latent Dirichlet Allocation (LDA) and its hierarchical variants, form a natural class of models for learning accurate entity disambiguation models from crowd-sourced knowledge bases such as Wikipedia. Our main contribution is a semi-supervised hierarchical model called Wikipedia-based Pachinko Allocation Model} (WPAM) that exploits: (1) All words in the Wikipedia corpus to learn word-entity associations (unlike existing approaches that only use words in a small fixed window around annotated entity references in Wikipedia pages), (2) Wikipedia annotations to appropriately bias the assignment of entity labels to annotated (and co-occurring unannotated) words during model learning, and (3) Wikipedia's category hierarchy to capture co-occurrence patterns among entities. We also propose a scheme for pruning spurious nodes from Wikipedia's crowd-sourced category hierarchy. In our experiments with multiple real-life datasets, we show that WPAM outperforms state-of-the-art baselines by as much as 16% in terms of disambiguation accuracy.",
"title": ""
},
{
"docid": "2693a2815adf4e731d87f9630cd7c427",
"text": "A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Both stages are based on fuzzy rules which make use of membership functions. The filter can be applied iteratively to effectively reduce heavy noise. In particular, the shape of the membership functions is adapted according to the remaining noise level after each iteration, making use of the distribution of the homogeneity in the image. A statistical model for the noise distribution can be incorporated to relate the homogeneity to the adaptation scheme of the membership functions. Experimental results are obtained to show the feasibility of the proposed approach. These results are also compared to other filters by numerical measures and visual inspection.",
"title": ""
},
{
"docid": "e96c9bdd3f5e9710f7264cbbe02738a7",
"text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.",
"title": ""
},
{
"docid": "9a9fdd35a3f9df6ebdd7ea8f0cac5a00",
"text": "The recent appearance of augmented reality headsets, such as the Microsoft HoloLens, is a marked move from traditional 2D screen to 3D hologram-like interfaces. Striving to be completely portable, these devices unfortunately suffer multiple limitations, such as the lack of real-time, high quality depth data, which severely restricts their use as research tools. To mitigate this restriction, we provide a simple method to augment a HoloLens headset with much higher resolution depth data. To do so, we calibrate an external depth sensor connected to a computer stick that communicates with the HoloLens headset in real-time. To show how this system could be useful to the research community, we present an implementation of small object detection on HoloLens device.",
"title": ""
},
{
"docid": "2fa5df7c70c05445c9f300c7da0f8f87",
"text": "In this paper, we describe K-Extractor, a powerful NLP framework that provides integrated and seamless access to structured and unstructured information with minimal effort. The K-Extractor converts natural language documents into a rich set of semantic triples that, not only, can be stored within an RDF semantic index, but also, can be queried using natural language questions, thus eliminating the need to manually formulate SPARQL queries. The K-Extractor greatly outperforms a free text search index-based question answering system.",
"title": ""
},
{
"docid": "9673939625a3caafecf3da68a19742b0",
"text": "Automatic detection of road regions in aerial images remains a challenging research topic. Most existing approaches work well on the requirement of users to provide some seedlike points/strokes in the road area as the initial location of road regions, or detecting particular roads such as well-paved roads or straight roads. This paper presents a fully automatic approach that can detect generic roads from a single unmanned aerial vehicles (UAV) image. The proposed method consists of two major components: automatic generation of road/nonroad seeds and seeded segmentation of road areas. To know where roads probably are (i.e., road seeds), a distinct road feature is proposed based on the stroke width transformation (SWT) of road image. To the best of our knowledge, it is the first time to introduce SWT as road features, which show the effectiveness on capturing road areas in images in our experiments. Different road features, including the SWT-based geometry information, colors, and width, are then combined to classify road candidates. Based on the candidates, a Gaussian mixture model is built to produce road seeds and background seeds. Finally, starting from these road and background seeds, a convex active contour model segmentation is proposed to extract whole road regions. Experimental results on varieties of UAV images demonstrate the effectiveness of the proposed method. Comparison with existing techniques shows the robustness and accuracy of our method to different roads.",
"title": ""
},
{
"docid": "9d9428fe9adbe3d1197e12ba4cbafe87",
"text": "BACKGROUND\nLegalization of euthanasia and physician-assisted suicide has been heavily debated in many countries. To help inform this debate, we describe the practices of euthanasia and assisted suicide, and the use of life-ending drugs without an explicit request from the patient, in Flanders, Belgium, where euthanasia is legal.\n\n\nMETHODS\nWe mailed a questionnaire regarding the use of life-ending drugs with or without explicit patient request to physicians who certified a representative sample (n = 6927) of death certificates of patients who died in Flanders between June and November 2007.\n\n\nRESULTS\nThe response rate was 58.4%. Overall, 208 deaths involving the use of life-ending drugs were reported: 142 (weighted prevalence 2.0%) were with an explicit patient request (euthanasia or assisted suicide) and 66 (weighted prevalence 1.8%) were without an explicit request. Euthanasia and assisted suicide mostly involved patients less than 80 years of age, those with cancer and those dying at home. Use of life-ending drugs without an explicit request mostly involved patients 80 years of older, those with a disease other than cancer and those in hospital. Of the deaths without an explicit request, the decision was not discussed with the patient in 77.9% of cases. Compared with assisted deaths with the patient's explicit request, those without an explicit request were more likely to have a shorter length of treatment of the terminal illness, to have cure as a goal of treatment in the last week, to have a shorter estimated time by which life was shortened and to involve the administration of opioids.\n\n\nINTERPRETATION\nPhysician-assisted deaths with an explicit patient request (euthanasia and assisted suicide) and without an explicit request occurred in different patient groups and under different circumstances. Cases without an explicit request often involved patients whose diseases had unpredictable end-of-life trajectories. Although opioids were used in most of these cases, misconceptions seem to persist about their actual life-shortening effects.",
"title": ""
},
{
"docid": "11c117d839be466c369274f021caba13",
"text": "Android smartphones are becoming increasingly popular. The open nature of Android allows users to install miscellaneous applications, including the malicious ones, from third-party marketplaces without rigorous sanity checks. A large portion of existing malwares perform stealthy operations such as sending short messages, making phone calls and HTTP connections, and installing additional malicious components. In this paper, we propose a novel technique to detect such stealthy behavior. We model stealthy behavior as the program behavior that mismatches with user interface, which denotes the user's expectation of program behavior. We use static program analysis to attribute a top level function that is usually a user interaction function with the behavior it performs. Then we analyze the text extracted from the user interface component associated with the top level function. Semantic mismatch of the two indicates stealthy behavior. To evaluate AsDroid, we download a pool of 182 apps that are potentially problematic by looking at their permissions. Among the 182 apps, AsDroid reports stealthy behaviors in 113 apps, with 28 false positives and 11 false negatives.",
"title": ""
},
{
"docid": "118c147b4bca8036f2ce360609a3c3e5",
"text": "Robot manipulators are increasingly used in minimally invasive surgery (MIS). They are required to have small size, wide workspace, adequate dexterity and payload ability when operating in confined surgical cavity. Snake-like flexible manipulators are well suited to these applications. However, conventional fully actuated snake-like flexible manipulators are difficult to miniaturize and even after miniaturization the payload is very limited. The alternative is to use underactuated snake-like flexible manipulators. Three prevailing designs are tendon-driven continuum manipulators (TCM), tendon-driven serpentine manipulators (TSM) and concentric tube manipulators (CTM). In this paper, the three designs are compared at the mechanism level from the kinematics point of view. The workspace and distal end dexterity are compared for TCM, TSM and CTM with one, two and three sections, respectively. Other aspects of these designs are also discussed, including sweeping motion, scaling, force sensing, stiffness control, etc. From the results, the tendon-driven designs and concentric tube design complement each other in terms of their workspace, which is influenced by the number of sections as well as the length distribution among sections. The tendon-driven designs entail better distal end dexterity while generate larger sweeping motion in positions close to the shaft.",
"title": ""
},
{
"docid": "a408e25435dded29744cf2af0f7da1e5",
"text": "Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.",
"title": ""
},
{
"docid": "ff705a36e71e2aa898e99fbcfc9ec9d2",
"text": "This paper presents a design concept for smart home automation system based on the idea of the internet of things (IoT) technology. The proposed system has two scenarios where first one is denoted as a wireless based and the second is a wire-line based scenario. Each scenario has two operational modes for manual and automatic use. In Case of the wireless scenario, Arduino-Uno single board microcontroller as a central controller for home appliances is applied. Cellular phone with Matlab-GUI platform for monitoring and controlling processes through Wi-Fi communication technology is addressed. For the wire-line scenario, field-programmable gate array (FPGA) kit as a main controller is used. Simulation and hardware realization for the proposed system show its reliability and effectiveness.",
"title": ""
},
{
"docid": "34cc6503494981fda7f69c794525776a",
"text": "In this article we investigate the problem of human action recognition in static images. By action recognition we intend a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, and that a descriptor based on color names outperforms pure color descriptors. Our experiments demonstrate that late fusion of color and shape information outperforms other approaches on action recognition. Finally, we show that the different color–shape fusion approaches result in complementary information and combining them yields state-of-the-art performance for action classification.",
"title": ""
},
{
"docid": "185ae8a2c89584385a810071c6003c15",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "04e478610728f0aae76e5299c28da25a",
"text": "Single image super resolution is one of the most important topic in computer vision and image processing research, many convolutional neural networks (CNN) based super resolution algorithms were proposed and achieved advanced performance, especially in recovering image details, in which PixelCNN is the most representative one. However, due to the intensive computation requirement of PixelCNN model, running time remains a major challenge, which limited its wider application. In this paper, several modifications are proposed to improve PixelCNN based recursive super resolution model. First, a discrete logistic mixture likelihood is adopted, then a cache structure for generating process is proposed, with these modifications, numerous redundant computations are removed without loss of accuracy. Finally, a partial generating network is proposed for higher resolution generation. Experiments on CelebA dataset demonstrate the effectiveness the superiority of the proposed method.",
"title": ""
},
{
"docid": "4e9ca5976fc68c319e8303076ca80dc7",
"text": "A self-driving car, to be deployed in real-world driving environments, must be capable of reliably detecting and effectively tracking of nearby moving objects. This paper presents our new, moving object detection and tracking system that extends and improves our earlier system used for the 2007 DARPA Urban Challenge. We revised our earlier motion and observation models for active sensors (i.e., radars and LIDARs) and introduced a vision sensor. In the new system, the vision module detects pedestrians, bicyclists, and vehicles to generate corresponding vision targets. Our system utilizes this visual recognition information to improve a tracking model selection, data association, and movement classification of our earlier system. Through the test using the data log of actual driving, we demonstrate the improvement and performance gain of our new tracking system.",
"title": ""
},
{
"docid": "fb1c9fcea2f650197b79711606d4678b",
"text": "Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.",
"title": ""
},
{
"docid": "40c87c73dad1bf79e1dd047b320a5b49",
"text": "Very recently, an increasing number of software companies adopted DevOps to adapt themselves to the ever-changing business environment. While it is important to mature adoption of the DevOps for these companies, no dedicated maturity models for DevOps exist. Meanwhile, maturity models such as CMMI models have demonstrated their effects in the traditional paradigm of software industry, however, it is not clear whether the CMMI models could guide the improvements with the context of DevOps. This paper reports a case study aiming at evaluating the feasibility to apply the CMMI models to guide process improvement for DevOps projects and identifying possible gaps. Using a structured method(i.e., SCAMPI C), we conducted a case study by interviewing four employees from one DevOps project. Based on evidence we collected in the case study, we managed to characterize the maturity/capability of the DevOps project, which implies the possibility to use the CMMI models to appraise the current processes in this DevOps project and guide future improvements. Meanwhile, several gaps also are identified between the CMMI models and the DevOps mode. In this sense, the CMMI models could be taken as a good foundation to design suitable maturity models so as to guide process improvement for projects adopting the DevOps.",
"title": ""
},
{
"docid": "c699ede2caeb5953decc55d8e42c2741",
"text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.",
"title": ""
},
{
"docid": "715877425204ebb5764bd6ca57ac54ea",
"text": "User Generated Content (UGC) is re-shaping the way people watch video and TV, with millions of video producers and consumers. In particular, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and developing new business opportunities. To better understand the impact of UGC systems, we have analyzed YouTube, the world's largest UGC VoD system. Based on a large amount of data collected, we provide an in-depth study of YouTube and other similar UGC systems. In particular, we study the popularity life-cycle of videos, the intrinsic statistical properties of requests and their relationship with video age, and the level of content aliasing or of illegal content in the system. We also provide insights on the potential for more efficient UGC VoD systems (e.g. utilizing P2P techniques or making better use of caching). Finally, we discuss the opportunities to leverage the latent demand for niche videos that are not reached today due to information filtering effects or other system scarcity distortions. Overall, we believe that the results presented in this paper are crucial in understanding UGC systems and can provide valuable information to ISPs, site administrators, and content owners with major commercial and technical implications.",
"title": ""
}
] |
scidocsrr
|
dcf8cff45ebdd25d6815418d29ddca7d
|
"Owl" and "Lizard": Patterns of Head Pose and Eye Pose in Driver Gaze Classification
|
[
{
"docid": "9b1e1e91b8aacd1ed5d1aee823de7fd3",
"text": "—This paper presents a novel adaptive algorithm to detect the center of pupil in frontal view faces. This algorithm, at first, employs the viola-Jones face detector to find the approximate location of face in an image. The knowledge of the face structure is exploited to detect the eye region. The histogram of the detected region is calculated and its CDF is employed to extract the eyelids and iris region in an adaptive way. The center of this region is considered as the pupil center. The experimental results show ninety one percent's accuracy in detecting pupil center.",
"title": ""
}
] |
[
{
"docid": "4fc356024295824f6c68360bf2fcb860",
"text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.",
"title": ""
},
{
"docid": "e870d5f8daac0d13bdcffcaec4ba04c1",
"text": "In this paper the design, fabrication and test of X-band and 2-18 GHz wideband high power SPDT MMIC switches in microstrip GaN technology are presented. Such switches have demonstrated state-of-the-art performances. In particular the X-band switch exhibits 1 dB insertion loss, better than 37 dB isolation and a power handling capability at 9 GHz of better than 39 dBm at 1 dB insertion loss compression point; the wideband switch has an insertion loss lower than 2.2 dB, better than 25 dB isolation and a power handling capability of better than 38 dBm in the entire bandwidth.",
"title": ""
},
{
"docid": "b93ab92ac82a34d3a83240e251cf714e",
"text": "Short text is becoming ubiquitous in many modern information systems. Due to the shortness and sparseness of short texts, there are less informative word co-occurrences among them, which naturally pose great difficulty for classification tasks on such data. To overcome this difficulty, this paper proposes a new way for effectively classifying the short texts. Our method is based on a key observation that there usually exists ordered subsets in short texts, which is termed ``information path'' in this work, and classification on each subset based on the classification results of some pervious subsets can yield higher overall accuracy than classifying the entire data set directly. We propose a method to detect the information path and employ it in short text classification. Different from the state-of-art methods, our method does not require any external knowledge or corpus that usually need careful fine-tuning, which makes our method easier and more robust on different data sets. Experiments on two real world data sets show the effectiveness of the proposed method and its superiority over the existing methods.",
"title": ""
},
{
"docid": "fd1e327327068a1373e35270ef257c59",
"text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "42db53797dc57cfdb7f963c55bb7f039",
"text": "Vast amounts of artistic data is scattered on-line from both museums and art applications. Collecting, processing and studying it with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We continue to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we propose a challenge like nature to the new aggregated data set with almost half a million samples and structuredmeta-data to encourage further research and societal engagement. ACM Reference format: Gjorgji Strezoski and Marcel Worring. 2017. OmniArt: Multi-task Deep Learning for Artistic Data Analysis.",
"title": ""
},
{
"docid": "617ec3be557749e0646ad7092a1afcb6",
"text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.",
"title": ""
},
{
"docid": "40f32d675f581230ca70fa2ba9389eb6",
"text": "We depend on exposure to light to guide us, inform us about the outside world, and regulate the biological rhythms in our bodies. We think about turning lights on to improve our lives; however, for some people, exposure to light creates pain and distress that can overwhelm their desire to see. Photophobia is ocular or headache pain caused by normal or dim light. People are symptomatic when irradiance levels inducing pain fall into a range needed for functionality and productivity, making photoallodynia a more accurate term. “Dazzle” is a momentary and normal aversion response to bright lights that subsides within seconds, but photoallodynia only subsides when light exposure is reduced. Milder degrees of sensitivity may manifest as greater perceived comfort in dim illumination. In severe cases, the pain is so debilitating that people are physically and socially isolated into darkness. The suffering and loss of function associated with photoallodynia can be devastating, but it is underappreciated in clinical assessment, treatment, and basic and clinical research. Transient photoallodynia generally improves when the underlying condition resolves, as in association with ocular inflammation, dry eye syndrome and laser-assisted in situ keratomileusis surgery. Migraine-associated light sensitivity can be severe during migraine or mild (and non-clinical) during the interictal period. With so many causes of photoallodynia, a singular underlying mechanism is unlikely, although different etiologies likely have shared and unique components and pathways. Photoallodynia may originate by alteration of a trigeminal nociceptive pathway or possibly through direct retinal projections to higher brain regions involved in pain perception, including but not limited to the periaqueductal gray, the anterior cingulate and somatorsensory cortices, which are collectively termed the “pain matrix.” However, persistent photoallodynia, occurring in a number of ocular and central brain causes, can be remarkably resistant to therapy. The initial light detection that triggers a pain response likely arises through interaction of cone photoreceptors (color and acuity), rod photoreceptors (low light vision), and intrinsically photosensitive retinal ganglion cells (ipRGCs, pupil light reflex and circadian photoentrainment). We can gain clues as to these interactions by examining retinal diseases that cause – or do not cause – photoallodynia.",
"title": ""
},
{
"docid": "d6cf367f29ed1c58fb8fd0b7edf69458",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "9d9afbd6168c884f54f72d3daea57ca7",
"text": "0167-8655/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.patrec.2009.06.012 * Corresponding author. Tel.: +82 2 705 8931; fax: E-mail addresses: [email protected] (S. Yoon), sa Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8d3f65dbeba6c158126ae9d82c886687",
"text": "Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity. THE RELATION BETWEEN STOCK AND BOND RETURNS has been widely studied at the aggregate level ~see, e.g., Keim and Stambaugh ~1986!, Fama and French ~1989, 1993!, Campbell and Ammer ~1993!!. Recently, a few studies have investigated that relation at both the individual firm level ~see, e.g., Kwan ~1996!! and portfolio level ~see, e.g., Blume, Keim, and Patel ~1991!, Cornell and Green ~1991!!. These studies focus on corporate bond returns, or yield changes. The main conclusions of these papers are: ~1! high-grade bonds behave like Treasury bonds, and ~2! low-grade bonds are more sensitive to stock returns. The implications of these studies may be limited in many situations of interest, however. For example, hedge funds often take highly levered positions in corporate bonds while hedging away interest rate risk by shorting treasuries. As a consequence, their portfolios become extremely sensitive to changes in credit spreads rather than changes in bond yields. The distinc* Collin-Dufresne is at Carnegie Mellon University. Goldstein is at Washington University in St. Louis. Martin is at Arizona State University. A significant portion of this paper was written while Goldstein and Martin were at The Ohio State University. We thank Rui Albuquerque, Gurdip Bakshi, Greg Bauer, Dave Brown, Francesca Carrieri, Peter Christoffersen, Susan Christoffersen, Greg Duffee, Darrell Duffie, Vihang Errunza, Gifford Fong, Mike Gallmeyer, Laurent Gauthier, Rick Green, John Griffin, Jean Helwege, Kris Jacobs, Chris Jones, Andrew Karolyi, Dilip Madan, David Mauer, Erwan Morellec, Federico Nardari, N.R. Prabhala, Tony Sanders, Sergei Sarkissian, Bill Schwert, Ken Singleton, Chester Spatt, René Stulz ~the editor!, Suresh Sundaresan, Haluk Unal, Karen Wruck, and an anonymous referee for helpful comments. We thank Ahsan Aijaz, John Puleo, and Laura Tuttle for research assistance. We are also grateful to seminar participants at Arizona State University, University of Maryland, McGill University, The Ohio State University, University of Rochester, and Southern Methodist University. THE JOURNAL OF FINANCE • VOL. LVI, NO. 6 • DEC. 2001",
"title": ""
},
{
"docid": "ce2e955ef4fba68411cafab52d206b52",
"text": "Voice-enabled user interfaces have become a popular means of interaction with various kinds of applications and services. In addition to more traditional interaction paradigms such as keyword search, voice interaction can be a convenient means of communication for many groups of users. Amazon Alexa has become a valuable tool for building custom voice-enabled applications. In this demo paper we describe how we use Amazon Alexa technologies to build a Semantic Web applications able to answer factual questions using the Wikidata knowledge graph. We describe how the Amazon Alexa voice interface allows the user to communicate with the metaphactory knowledge graph management platform and a reusable procedure for producing the Alexa application configuration from semantic data in an automated way.",
"title": ""
},
{
"docid": "151fd47f87944978edfafb121b655ad8",
"text": "We introduce a pair of tools, Rasa NLU and Rasa Core, which are open source python libraries for building conversational software. Their purpose is to make machine-learning based dialogue management and language understanding accessible to non-specialist software developers. In terms of design philosophy, we aim for ease of use, and bootstrapping from minimal (or no) initial training data. Both packages are extensively documented and ship with a comprehensive suite of tests. The code is available at https://github.com/RasaHQ/",
"title": ""
},
{
"docid": "bc758b1dd8e3a75df2255bb880a716ef",
"text": "In recent years, convolutional neural networks (CNNs) based machine learning algorithms have been widely applied in computer vision applications. However, for large-scale CNNs, the computation-intensive, memory-intensive and resource-consuming features have brought many challenges to CNN implementations. This work proposes an end-to-end FPGA-based CNN accelerator with all the layers mapped on one chip so that different layers can work concurrently in a pipelined structure to increase the throughput. A methodology which can find the optimized parallelism strategy for each layer is proposed to achieve high throughput and high resource utilization. In addition, a batch-based computing method is implemented and applied on fully connected layers (FC layers) to increase the memory bandwidth utilization due to the memory-intensive feature. Further, by applying two different computing patterns on FC layers, the required on-chip buffers can be reduced significantly. As a case study, a state-of-the-art large-scale CNN, AlexNet, is implemented on Xilinx VC709. It can achieve a peak performance of 565.94 GOP/s and 391 FPS under 156MHz clock frequency which outperforms previous approaches.",
"title": ""
},
{
"docid": "b2ba44fb536ad11295bac85ed23daedd",
"text": "This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation, then is validated against the security requirements through construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.",
"title": ""
},
{
"docid": "857658968e3e237b33073ed87ff0fa1a",
"text": "Analysis of a worldwide sample of sudden deaths of politicians reveals a market-adjusted 1.7% decline in the value of companies headquartered in the politician’s hometown. The decline in value is followed by a drop in the rate of growth in sales and access to credit. Our results are particularly pronounced for family firms, firms with high growth prospects, firms in industries over which the politician has jurisdiction, and firms headquartered in highly corrupt countries.",
"title": ""
},
{
"docid": "7a4c7c21ae35d4056844af341495f655",
"text": "The development of a new measure of concussion knowledge and attitudes that is more comprehensive and psychometrically sound than previous measures is described. A group of high-school students (N = 529) completed the measure. The measure demonstrated fair to satisfactory test-retest reliability (knowledge items, r = .67; attitude items, r = .79). Exploratory factor analysis of the attitude items revealed a four-factor solution (eigenvalues ranged from 1.07-3.35) that displayed adequate internal consistency (Cohen's alpha range = .59-.72). Cluster analysis of the knowledge items resulted in a three-cluster solution distributed according to their level of difficulty. The potential uses for the measure are described.",
"title": ""
},
{
"docid": "6d2abcdd728a2355259c60c870b411a4",
"text": "Although providing feedback is commonly practiced in education, there is no general agreement regarding what type of feedback is most helpful and why it is helpful. This study examined the relationship between various types of feedback, potential internal mediators, and the likelihood of implementing feedback. Five main predictions were developed from the feedback literature in writing, specifically regarding feedback features (summarization, identifying problems, providing solutions, localization, explanations, scope, praise, and mitigating language) as they relate to potential causal mediators of problem or solution understanding and problem or solution agreement, leading to the final outcome of feedback implementation. To empirically test the proposed feedback model, 1,073 feedback segments from writing assessed by peers was analyzed. Feedback was collected using SWoRD, an online peer review system. Each segment was coded for each of the feedback features, implementation, agreement, and understanding. The correlations between the feedback features, levels of mediating variables, and implementation rates revealed several significant relationships. Understanding was the only significant mediator of implementation. Several feedback features were associated with understanding: including solutions, a summary of the performance, and the location of the problem were associated with increased understanding; and explanations of problems were associated with decreased understanding. Implications of these results are discussed.",
"title": ""
},
{
"docid": "3f1939623798f46dec5204793bedab9e",
"text": "Predictive business process monitoring exploits event logs to predict how ongoing (uncompleted) cases will unfold up to their completion. A predictive process monitoring framework collects a range of techniques that allow users to get accurate predictions about the achievement of a goal or about the time required for such an achievement for a given ongoing case. These techniques can be combined and their parameters configured in different framework instances. Unfortunately, a unique framework instance that is general enough to outperform others for every dataset, goal or type of prediction is elusive. Thus, the selection and configuration of a framework instance needs to be done for a given dataset. This paper presents a predictive process monitoring framework armed with a hyperparameter optimization method to select a suitable framework instance for a given dataset.",
"title": ""
}
] |
scidocsrr
|
d34c16e0088ecc96d5a99da85ad63f4b
|
Emotion Communication System
|
[
{
"docid": "7e78dbc7ae4fd9a2adbf7778db634b33",
"text": "Dynamic Proof of Storage (PoS) is a useful cryptographic primitive that enables a user to check the integrity of outsourced files and to efficiently update the files in a cloud server. Although researchers have proposed many dynamic PoS schemes in singleuser environments, the problem in multi-user environments has not been investigated sufficiently. A practical multi-user cloud storage system needs the secure client-side cross-user deduplication technique, which allows a user to skip the uploading process and obtain the ownership of the files immediately, when other owners of the same files have uploaded them to the cloud server. To the best of our knowledge, none of the existing dynamic PoSs can support this technique. In this paper, we introduce the concept of deduplicatable dynamic proof of storage and propose an efficient construction called DeyPoS, to achieve dynamic PoS and secure cross-user deduplication, simultaneously. Considering the challenges of structure diversity and private tag generation, we exploit a novel tool called Homomorphic Authenticated Tree (HAT). We prove the security of our construction, and the theoretical analysis and experimental results show that our construction is efficient in practice.",
"title": ""
},
{
"docid": "9df0df8eb4f71d8c6952e07a179b2ec4",
"text": "In interpersonal interactions, speech and body gesture channels are internally coordinated towards conveying communicative intentions. The speech-gesture relationship is influenced by the internal emotion state underlying the communication. In this paper, we focus on uncovering the emotional effect on the interrelation between speech and body gestures. We investigate acoustic features describing speech prosody (pitch and energy) and vocal tract configuration (MFCCs), as well as three types of body gestures, viz., head motion, lower and upper body motions. We employ mutual information to measure the coordination between the two communicative channels, and analyze the quantified speech-gesture link with respect to distinct levels of emotion attributes, i.e., activation and valence. The results reveal that the speech-gesture coupling is generally tighter for low-level activation and high-level valence, compared to high-level activation and low-level valence. We further propose a framework for modeling the dynamics of speech-gesture interaction. Experimental studies suggest that such quantified coupling representations can well discriminate different levels of activation and valence, reinforcing that emotions are encoded in the dynamics of the multimodal link. We also verify that the structures of the coupling representations are emotiondependent using subspace-based analysis.",
"title": ""
}
] |
[
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "8f97eed7ae59062915b422cb65c7729b",
"text": "In this modern scientific world, technologies are transforming rapidly but along with the ease and comfort they also bring in a big concern for security. Taking into account the physical security of the system to ensure access control and authentication of users, made us to switch to a new system of Biometric combined with ATM PIN as PIN can easily be guessed, stolen or misused. Biometric is added with the existing technology to double the security in order to reduce ATM frauds but it has also put forward several issues which include sensor durability and time consumption. This paper envelops two questions “Is it really worthy to go through the entire biometric process to just debit a low amount?” and “What could be the maximum amount one can lose if one's card is misused?” As an answer we propose a constraint on transactions by ATM involving biometric to improve the system performance and to solve the defined issues. The proposal is divided in two parts. The first part solves sensor performance issue by adding a limit on amount of cash and number of transactions is defined in such a way that if one need to withdraw a big amount OR attempts for multiple transactions by withdrawing small amount again and again, it shall be necessary to present biometric. On the other hand if one need to make only balance enquiry or the cash is low and the number of transactions in a day is less than defined attempts, biometric presentation is not mandatory. It may help users to save time and maintain sensor performance by not furnishing their biometric for few hundred apart from maintaining security. In the second part this paper explains how fingerprint verification is conducted if the claimant is allowed to access the system and what could be the measures to increase performance of fingerprint biometric system which could be added to our proposed system to enhance the overall system performance.",
"title": ""
},
{
"docid": "de0d2808f949723f1c0ee8e87052f889",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "9f328d46c30cac9bb210582113683432",
"text": "Clinical and hematologic studies of 16 adult patients whose leukemic cells had Tcell markers are reported from Japan, where the incidence of various lymphoproliferative diseases differs considerably from that in Western countries. Leukemic cells were studied by cytotoxicity tests with specific antisera against human T (ATS) and B cells (ABS) in addition to the usual Tand B-cell markers (E rosette, EAC rosette, and surface immunoglobulins). Characteristics of the clinical and hematologic findings were as follows: (1) onset in adulthood; (2) subacute or chronic leukemia with rapidly progressive terminal course; (3) leukemic cells killed by ATS and forming E rosettes; (4) Icykemic cells not morphologically monotonous and frequent cells with deeply indented or lobulated nuclei; (5) frequent skin involvement (9 patients); (6) common lymphadenopathy and hepatosplenomegaly; (7) no mediastinal mass; and, the most striking finding, (8) the clustering of the patients’ birthplaces, namely, 13 patients born in Kyushu. The relation. ship between our cases and other subacute or chronic adult T-ceIl malignancies such as chronic lymphocytic leukemia of T-cell origin, prolymphocytic leukemia with 1cell properties, S#{233}zarysyndrome, and mycosis fungoides is discussed.",
"title": ""
},
{
"docid": "a1af04cc0616533bd47bb660f0eff3cd",
"text": "Separating point clouds into ground and non-ground measurements is an essential step to generate digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms need to carefully set up a number of complicated parameters to achieve high accuracy. In this paper, we present a new filtering method which only needs a few easy-to-set integer and Boolean parameters. Within the proposed approach, a LiDAR point cloud is inverted, and then a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined to generate an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points and the generated surface. Benchmark datasets provided by ISPRS (International Society for Photogrammetry and Remote Sensing) working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most of the state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help the users without much experience to use LiDAR data and related technology in their own applications more easily.",
"title": ""
},
{
"docid": "d19503f965e637089d9fa200329f1349",
"text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.",
"title": ""
},
{
"docid": "fed52ce31aa0011f0ccb5392ded78979",
"text": "BACKGROUND\nEconomy, velocity/power at maximal oxygen uptake ([Formula: see text]) and endurance-specific muscle power tests (i.e. maximal anaerobic running velocity; vMART), are now thought to be the best performance predictors in elite endurance athletes. In addition to cardiovascular function, these key performance indicators are believed to be partly dictated by the neuromuscular system. One technique to improve neuromuscular efficiency in athletes is through strength training.\n\n\nOBJECTIVE\nThe aim of this systematic review was to search the body of scientific literature for original research investigating the effect of strength training on performance indicators in well-trained endurance athletes-specifically economy, [Formula: see text] and muscle power (vMART).\n\n\nMETHODS\nA search was performed using the MEDLINE, PubMed, ScienceDirect, SPORTDiscus and Web of Science search engines. Twenty-six studies met the inclusion criteria (athletes had to be trained endurance athletes with ≥6 months endurance training, training ≥6 h per week OR [Formula: see text] ≥50 mL/min/kg, the strength interventions had to be ≥5 weeks in duration, and control groups used). All studies were reviewed using the PEDro scale.\n\n\nRESULTS\nThe results showed that strength training improved time-trial performance, economy, [Formula: see text] and vMART in competitive endurance athletes.\n\n\nCONCLUSION\nThe present research available supports the addition of strength training in an endurance athlete's programme for improved economy, [Formula: see text], muscle power and performance. However, it is evident that further research is needed. Future investigations should include valid strength assessments (i.e. squats, jump squats, drop jumps) through a range of velocities (maximal-strength ↔ strength-speed ↔ speed-strength ↔ reactive-strength), and administer appropriate strength programmes (exercise, load and velocity prescription) over a long-term intervention period (>6 months) for optimal transfer to performance.",
"title": ""
},
{
"docid": "8c596d99bb1ba18f2fb444583c255d90",
"text": "FFT literature has been mostly concerned with minimizing the number of floating-point operations performed by an algorithm. Unfortunately, on present-day microprocessors this measure is far less important than it used to be, and interactions with the processor pipeline and the memory hierarchy have a larger impact on performance. Consequently, one must know the details of a computer architecture in order to design a fast algorithm. In this paper, we propose an adaptive FFT program that tunes the computation automatically for any particular hardware. We compared our program, called FFTW, with over 40 implementations of the FFT on 7 machines. Our tests show that FFTW’s self-optimizing approach usually yields significantly better performance than all other publicly available software. FFTW also compares favorably with machine-specific, vendor-optimized libraries.",
"title": ""
},
{
"docid": "cbcdc411e22786dcc1b3655c5e917fae",
"text": "Local intracellular Ca(2+) transients, termed Ca(2+) sparks, are caused by the coordinated opening of a cluster of ryanodine-sensitive Ca(2+) release channels in the sarcoplasmic reticulum of smooth muscle cells. Ca(2+) sparks are activated by Ca(2+) entry through dihydropyridine-sensitive voltage-dependent Ca(2+) channels, although the precise mechanisms of communication of Ca(2+) entry to Ca(2+) spark activation are not clear in smooth muscle. Ca(2+) sparks act as a positive-feedback element to increase smooth muscle contractility, directly by contributing to the global cytoplasmic Ca(2+) concentration ([Ca(2+)]) and indirectly by increasing Ca(2+) entry through membrane potential depolarization, caused by activation of Ca(2+) spark-activated Cl(-) channels. Ca(2+) sparks also have a profound negative-feedback effect on contractility by decreasing Ca(2+) entry through membrane potential hyperpolarization, caused by activation of large-conductance, Ca(2+)-sensitive K(+) channels. In this review, the roles of Ca(2+) sparks in positive- and negative-feedback regulation of smooth muscle function are explored. We also propose that frequency and amplitude modulation of Ca(2+) sparks by contractile and relaxant agents is an important mechanism to regulate smooth muscle function.",
"title": ""
},
{
"docid": "c3c15cc4edc816e53d1a8c19472ad203",
"text": "Among different Business Process Management strategies and methodologies, one common feature is to capture existing processes and representing the new processes adequately. Business Process Modelling (BPM) plays a crucial role on such an effort. This paper proposes a “to-be” inbound logistics business processes model using BPMN 2.0 standard specifying the structure and behaviour of the system within the SME environment. The generic framework of inbound logistics model consists of one main high-level module-based system named Order System comprising of four main sub-systems of the Order core, Procure, Auction, and Purchase systems. The system modelingis elaborately discussed to provide a business analytical perspective from various activities in inbound logistics system. Since the main purpose of the paper is to map out the functionality and behaviour of Logistics system requirements, employing the model is of a great necessity on the future applications at system development such as in the data modelling effort. Moreover, employing BPMN 2.0 method and providing explanatory techniques as a nifty guideline and framework to assist the business process practitioners, analysts and managers at identical systems.",
"title": ""
},
{
"docid": "69b1c87a06b1d83fd00d9764cdadc2e9",
"text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental",
"title": ""
},
{
"docid": "9edd6f8e6349689b71a351f5947497f7",
"text": "Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the backgrounds especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects and background clutters. Correlation filters are applied to feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method to adaptively determine weights of weak classifiers by considering both the difference between the historical as well as instantaneous performance, and the difference among all weak trackers over time. In addition, we design a siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against the state-of-the-art tracking methods.",
"title": ""
},
{
"docid": "887a80309231e055fd46b9341a4ab83b",
"text": "This paper presents radar cross section (RCS) measurement for pedestrian detection in 79GHz-band radar system. For a human standing at 6.2 meters, the RCS distribution's median value is -11.1 dBsm and the 90 % of RCS fluctuation is between -20.7 dBsm and -4.8 dBsm. Other measurement results (human body poses beside front) are shown. And we calculated the coefficient values of the Weibull distribution fitting to the human body RCS distribution.",
"title": ""
},
{
"docid": "07e93064b1971a32b5c85b251f207348",
"text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.",
"title": ""
},
{
"docid": "adf3678a3f1fcd5db580a417194239f2",
"text": "In training deep neural networks for semantic segmentation, the main limiting factor is the low amount of ground truth annotation data that is available in currently existing datasets. The limited availability of such data is due to the time cost and human effort required to accurately and consistently label real images on a pixel level. Modern sandbox video game engines provide open world environments where traffic and pedestrians behave in a pseudo-realistic manner. This caters well to the collection of a believable road-scene dataset. Utilizing open-source tools and resources found in single-player modding communities, we provide a method for persistent, ground truth, asset annotation of a game world. By collecting a synthetic dataset containing upwards of 1, 000, 000 images, we demonstrate realtime, on-demand, ground truth data annotation capability of our method. Supplementing this synthetic data to Cityscapes dataset, we show that our data generation method provides qualitative as well as quantitative improvements—for training networks—over previous methods that use video games as surrogate.",
"title": ""
},
{
"docid": "d3e8dce306eb20a31ac6b686364d0415",
"text": "Lung diseases are the deadliest disease in the world. The computer aided detection system in lung diseases needed accurate lung segmentation to preplan the pulmonary treatment and surgeries. The researchers undergone the lung segmentation need a deep study and understanding of the traditional and recent papers developed in the lung segmentation field so that they can continue their research journey in an efficient way with successful outcomes. The need of reviewing the research papers is now a most wanted one for researches so this paper makes a survey on recent trends of pulmonary lung segmentation. Seven recent papers are carried out to analyze the performance characterization of themselves. The working methods, purpose for development, name of algorithm and drawbacks of the method are taken into consideration for the survey work. The tables and charts are drawn based on the reviewed papers. The study of lung segmentation research is more helpful to new and fresh researchers who are committed their research in lung segmentation.",
"title": ""
},
{
"docid": "b1845c42902075de02c803e77345a30f",
"text": "Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from taskspecific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data. On labeled examples, standard supervised learning is used. On unlabeled examples, CVT teaches auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) to match the predictions of the full model seeing the whole input. Since the auxiliary modules and the full model share intermediate representations, this in turn improves the full model. Moreover, we show that CVT is particularly effective when combined with multitask learning. We evaluate CVT on five sequence tagging tasks, machine translation, and dependency parsing, achieving state-of-the-art results.1",
"title": ""
},
{
"docid": "5b67f07b5ce37c0dd1bb9be1af6c6005",
"text": "Anomaly detection is the identification of items or observations which deviate from an expected pattern in a dataset. This paper proposes a novel real time anomaly detection framework for dynamic resource scheduling of a VMware-based cloud data center. The framework monitors VMware performance stream data (e.g. CPU load, memory usage, etc.). Hence, the framework continuously needs to collect data and make decision without any delay. We have used Apache Storm, distributed framework for handling performance stream data and making prediction without any delay. Storm is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout) that is good for batch processing. An incremental clustering algorithm to model benign characteristics is incorporated in our storm-based framework. During continuous incoming test stream, if the model finds data deviated from its benign behavior, it considers that as an anomaly. We have shown effectiveness of our framework by providing real-time complex analytic functionality over stream data.",
"title": ""
},
{
"docid": "955882547c8d7d455f3d0a6c2bccd2b4",
"text": "Recently there has been quite a number of independent research activities that investigate the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; iii) we analyze the characteristics of the SIoT network structure by means of simulations.",
"title": ""
},
{
"docid": "3567ec67dc263a6585e8d3af62b1d9f1",
"text": "SemStim is a graph-based recommendation algorithm which is based on Spreading Activation and adds targeted activation and duration constraints. SemStim is not affected by data sparsity, the cold-start problem or data quality issues beyond the linking of items to DBpedia. The overall results show that the performance of SemStim for the diversity task of the challenge is comparable to the other participants, as it took 3rd place out of 12 participants with 0.0413 F1@20 and 0.476 ILD@20. In addition, as SemStim has been designed for the requirements of cross-domain recommendations with different target and source domains, this shows that SemStim can also provide competitive single-domain recommendations.",
"title": ""
}
] |
scidocsrr
|
9d7233877a79481ef40fe83b7edbf01f
|
Who Owns the Data? Open Data for Healthcare
|
[
{
"docid": "f1294ba7d894db9c5145d11f1251a498",
"text": "A grand goal of future medicine is in modelling the complexity of patients to tailor medical decisions, health practices and therapies to the individual patient. This trend towards personalized medicine produces unprecedented amounts of data, and even though the fact that human experts are excellent at pattern recognition in dimensions of ≤ 3, the problem is that most biomedical data is in dimensions much higher than 3, making manual analysis difficult and often impossible. Experts in daily medical routine are decreasingly capable of dealing with the complexity of such data. Moreover, they are not interested the data, they need knowledge and insight in order to support their work. Consequently, a big trend in computer science is to provide efficient, useable and useful computational methods, algorithms and tools to discover knowledge and to interactively gain insight into high-dimensional data. A synergistic combination of methodologies of two areas may be of great help here: Human–Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning. A trend in both disciplines is the acquisition and adaptation of representations that support efficient learning. Mapping higher dimensional data into lower dimensions is a major task in HCI, and a concerted effort of computational methods including recent advances from graphtheory and algebraic topology may contribute to finding solutions. Moreover, much biomedical data is sparse, noisy and timedependent, hence entropy is also amongst promising topics. This paper provides a rough overview of the HCI-KDD approach and focuses on three future trends: graph-based mining, topological data mining and entropy-based data mining.",
"title": ""
}
] |
[
{
"docid": "21af4ea62f07966097c8ab46f7226907",
"text": "With the introduction of Microsoft Kinect, there has been considerable interest in creating various attractive and feasible applications in related research fields. Kinect simultaneously captures the depth and color information and provides real-time reliable 3D full-body human-pose reconstruction that essentially turns the human body into a controller. This article presents a finger-writing system that recognizes characters written in the air without the need for an extra handheld device. This application adaptively merges depth, skin, and background models for the hand segmentation to overcome the limitations of the individual models, such as hand-face overlapping problems and the depth-color nonsynchronization. The writing fingertip is detected by a new real-time dual-mode switching method. The recognition accuracy rate is greater than 90 percent for the first five candidates of Chinese characters, English characters, and numbers.",
"title": ""
},
{
"docid": "7654ada6aabee2f8abf411dba5383d96",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have been demonstrated successful for object detections. However, the size of network input is limited by the amount of memory available on GPUs. Moreover, performance degrades when detecting small objects. To alleviate the memory usage and improve the performance of detecting small traffic signs, we proposed an approach for detecting small traffic signs from large images under real world conditions. In particular, large images are broken into small patches as input to a Small-Object-Sensitive-CNN (SOS-CNN) modified from a Single Shot Multibox Detector (SSD) framework with a VGG-16 network as the base network to produce patch-level object detection results. Scale invariance is achieved by applying the SOS-CNN on an image pyramid. Then, image-level object detection is obtained by projecting all the patch-level detection results to the image at the original scale. Experimental results on a real-world conditioned traffic sign dataset have demonstrated the effectiveness of the proposed method in terms of detection accuracy and recall, especially for those with small sizes.",
"title": ""
},
{
"docid": "947a96e2115f5b271f5550e090859133",
"text": "Degenerative lumbar spinal stenosis is caused by mechanical factors and/or biochemical alterations within the intervertebral disk that lead to disk space collapse, facet joint hypertrophy, soft-tissue infolding, and osteophyte formation, which narrows the space available for the thecal sac and exiting nerve roots. The clinical consequence of this compression is neurogenic claudication and varying degrees of leg and back pain. Degenerative lumbar spinal stenosis is a major cause of pain and impaired quality of life in the elderly. The natural history of this condition varies; however, it has not been shown to worsen progressively. Nonsurgical management consists of nonsteroidal anti-inflammatory drugs, physical therapy, and epidural steroid injections. If nonsurgical management is unsuccessful and neurologic decline persists or progresses, surgical treatment, most commonly laminectomy, is indicated. Recent prospective randomized studies have demonstrated that surgery is superior to nonsurgical management in terms of controlling pain and improving function in patients with lumbar spinal stenosis.",
"title": ""
},
{
"docid": "4261e44dad03e8db3c0520126b9c7c4d",
"text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a may that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.",
"title": ""
},
{
"docid": "a949afe3f53bf7695a35c0c1cc8374c3",
"text": "Increasingly complex proteins can be made by a recombinant chemical approach where proteins that can be made easily can be combined by site-specific chemical conjugation to form multifunctional or more active protein therapeutics. Protein dimers may display increased avidity for cell surface receptors. The increased size of protein dimers may also increase circulation times. Cytokines bind to cell surface receptors that dimerise, so much of the solvent accessible surface of a cytokine is involved in binding to its target. Interferon (IFN) homo-dimers (IFN-PEG-IFN) were prepared by two methods: site-specific bis-alkylation conjugation of PEG to the two thiols of a native disulphide or to two imidazoles on a histidine tag of two His8-tagged IFN (His8IFN). Several control conjugates were also prepared to assess the relative activity of these IFN homo-dimers. The His8IFN-PEG20-His8IFN obtained by histidine-specific conjugation displayed marginally greater in vitro antiviral activity compared to the IFN-PEG20-IFN homo-dimer obtained by disulphide re-bridging conjugation. This result is consistent with previous observations in which enhanced retention of activity was made possible by conjugation to an N-terminal His-tag on the IFN. Comparison of the antiviral and antiproliferative activities of the two IFN homo-dimers prepared by disulphide re-bridging conjugation indicated that IFN-PEG10-IFN was more biologically active than IFN-PEG20-IFN. This result suggests that the size of PEG may influence the antiviral activity of IFN-PEG-IFN homo-dimers.",
"title": ""
},
{
"docid": "49ff096deb6621438286942b792d6af3",
"text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "3fd8092faee792a316fb3d1d7c2b6244",
"text": "The complete dynamics model of a four-Mecanum-wheeled robot considering mass eccentricity and friction uncertainty is derived using the Lagrange’s equation. Then based on the dynamics model, a nonlinear stable adaptive control law is derived using the backstepping method via Lyapunov stability theory. In order to compensate for the model uncertainty, a nonlinear damping term is included in the control law, and the parameter update law with σ-modification is considered for the uncertainty estimation. Computer simulations are conducted to illustrate the suggested control approach.",
"title": ""
},
{
"docid": "a6b5f49b8161b45540bdd333d8588cd8",
"text": "Personality inconsistency is one of the major problems for chit-chat sequence to sequence conversational agents. Works studying this problem have proposed models with the capability of generating personalized responses, but there is not an existing evaluation method for measuring the performance of these models on personality. This thesis develops a new evaluation method based on the psychological study of personality, in particular the Big Five personality traits. With the new evaluation method, the thesis examines if the responses generated by personalized chit-chat sequence to sequence conversational agents are distinguished for speakers with different personalities. The thesis also proposes a new model that generates distinguished responses based on given personalities. The results of our experiments in the thesis show that: for both the existing personalized model and the new model that we propose, the generated responses for speakers with different personalities are significantly more distinguished than a random baseline; specially for our new model, it has the capability of generating distinguished responses for different types of personalities measured by the Big Five personality traits.",
"title": ""
},
{
"docid": "1148cc41ee6d016a495856789a7b739d",
"text": "Visual reasoning is a special visual question answering problem that is multi-step and compositional by nature, and also requires intensive text-vision interactions. We propose CMM: Cascaded Mutual Modulation as a novel end-to-end visual reasoning model. CMM includes a multi-step comprehension process for both question and image. In each step, we use a Feature-wise Linear Modulation (FiLM) technique to enable textual/visual pipeline to mutually control each other. Experiments show that CMM significantly outperforms most related models, and reach stateof-the-arts on two visual reasoning benchmarks: CLEVR and NLVR, collected from both synthetic and natural languages. Ablation studies confirm that both our multistep framework and our visual-guided language modulation are critical to the task. Our code is available at https://github. com/FlamingHorizon/CMM-VR.",
"title": ""
},
{
"docid": "dfb83ad16854797137e34a5c7cb110ae",
"text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.",
"title": ""
},
{
"docid": "15a76f43782ef752e4b8e61e38726d69",
"text": "This paper considers invariant texture analysis. Texture analysis approaches whose performances are not a,ected by translation, rotation, a.ne, and perspective transform are addressed. Existing invariant texture analysis algorithms are carefully studied and classi0ed into three categories: statistical methods, model based methods, and structural methods. The importance of invariant texture analysis is presented 0rst. Each approach is reviewed according to its classi0cation, and its merits and drawbacks are outlined. The focus of possible future work is also suggested. ? 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "18968ed6bec670e4c8ba93933d0cc3e3",
"text": "The medial prefrontal cortex (MPFC) is regarded as a region of the brain that supports self-referential processes, including the integration of sensory information with self-knowledge and the retrieval of autobiographical information. I used functional magnetic resonance imaging and a novel procedure for eliciting autobiographical memories with excerpts of popular music dating to one's extended childhood to test the hypothesis that music and autobiographical memories are integrated in the MPFC. Dorsal regions of the MPFC (Brodmann area 8/9) were shown to respond parametrically to the degree of autobiographical salience experienced over the course of individual 30 s excerpts. Moreover, the dorsal MPFC also responded on a second, faster timescale corresponding to the signature movements of the musical excerpts through tonal space. These results suggest that the dorsal MPFC associates music and memories when we experience emotionally salient episodic memories that are triggered by familiar songs from our personal past. MPFC acted in concert with lateral prefrontal and posterior cortices both in terms of tonality tracking and overall responsiveness to familiar and autobiographically salient songs. These findings extend the results of previous autobiographical memory research by demonstrating the spontaneous activation of an autobiographical memory network in a naturalistic task with low retrieval demands.",
"title": ""
},
{
"docid": "04c52aa382cf53c3ab208bd3c0fc5354",
"text": "This article is the last of our series of articles on survey research. In it, we discuss how to analyze survey data. We provide examples of correct and incorrect analysis techniques used in software engineering surveys.",
"title": ""
},
{
"docid": "bb547f90a98aa25d0824dc63b9de952d",
"text": "When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.",
"title": ""
},
{
"docid": "0e2a3870af4b7636c0f5e56d658fcc77",
"text": "In this review, we provide an overview of protein synthesis in the yeast Saccharomyces cerevisiae The mechanism of protein synthesis is well conserved between yeast and other eukaryotes, and molecular genetic studies in budding yeast have provided critical insights into the fundamental process of translation as well as its regulation. The review focuses on the initiation and elongation phases of protein synthesis with descriptions of the roles of translation initiation and elongation factors that assist the ribosome in binding the messenger RNA (mRNA), selecting the start codon, and synthesizing the polypeptide. We also examine mechanisms of translational control highlighting the mRNA cap-binding proteins and the regulation of GCN4 and CPA1 mRNAs.",
"title": ""
},
{
"docid": "8695757545e44358fd63f06936335903",
"text": "We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, We propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.",
"title": ""
},
{
"docid": "a06274d9bf6dba90ea0178ec11a20fb6",
"text": "Osteoporosis has become one of the most prevalent and costly diseases in the world. It is a metabolic disease characterized by reduction in bone mass due to an imbalance between bone formation and resorption. Osteoporosis causes fractures, prolongs bone healing, and impedes osseointegration of dental implants. Its pathological features include osteopenia, degradation of bone tissue microstructure, and increase of bone fragility. In traditional Chinese medicine, the herb Rhizoma Drynariae has been commonly used to treat osteoporosis and bone nonunion. However, the precise underlying mechanism is as yet unclear. Osteoprotegerin is a cytokine receptor shown to play an important role in osteoblast differentiation and bone formation. Hence, activators and ligands of osteoprotegerin are promising drug targets and have been the focus of studies on the development of therapeutics against osteoporosis. In the current study, we found that naringin could synergistically enhance the action of 1α,25-dihydroxyvitamin D3 in promoting the secretion of osteoprotegerin by osteoblasts in vitro. In addition, naringin can also influence the generation of osteoclasts and subsequently bone loss during organ culture. In conclusion, this study provides evidence that natural compounds such as naringin have the potential to be used as alternative medicines for the prevention and treatment of osteolysis.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
}
] |
scidocsrr
|
48464a669170e50b8671e779355d6e92
|
EgoGesture: A New Dataset and Benchmark for Egocentric Hand Gesture Recognition
|
[
{
"docid": "9c562763cac968ce38359635d1826ff9",
"text": "This paper proposes a novel multi-layered gesture recognition method with Kinect. We explore the essential linguistic characters of gestures: the components concurrent character and the sequential organization character, in a multi-layered framework, which extracts features from both the segmented semantic units and the whole gesture sequence and then sequentially classifies the motion, location and shape components. In the first layer, an improved principle motion is applied to model the motion component. In the second layer, a particle-based descriptor and a weighted dynamic time warping are proposed for the location component classification. In the last layer, the spatial path warping is further proposed to classify the shape component represented by unclosed shape context. The proposed method can obtain relatively high performance for one-shot learning gesture recognition on the ChaLearn Gesture Dataset comprising more than 50, 000 gesture sequences recorded with Kinect.",
"title": ""
}
] |
[
{
"docid": "2eb303f3382491ae1977a3e907f197c0",
"text": "Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain and their results usually lack of diversity in the sense that a fixed image usually leads to (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image should inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain will lead to diverse translation results for a fixed input image from the source domain, and therefore the conditional input image helps to control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one translation from A domain to B domain, and the other one from B domain to A domain) together for inputs combination and reconstruction while preserving domain independent features. We carry out experiments on men's faces from-to women's faces translation and edges to shoes&bags translations. The results demonstrate the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "ce305309d82e2d2a3177852c0bb08105",
"text": "BACKGROUND\nEmpathizing is a specific component of social cognition. Empathizing is also specifically impaired in autism spectrum condition (ASC). These are two dimensions, measurable using the Empathy Quotient (EQ) and the Autism Spectrum Quotient (AQ). ASC also involves strong systemizing, a dimension measured using the Systemizing Quotient (SQ). The present study examined the relationship between the EQ, AQ and SQ. The EQ and SQ have been used previously to test for sex differences in 5 'brain types' (Types S, E, B and extremes of Type S or E). Finally, people with ASC have been conceptualized as an extreme of the male brain.\n\n\nMETHOD\nWe revised the SQ to avoid a traditionalist bias, thus producing the SQ-Revised (SQ-R). AQ and EQ were not modified. All 3 were administered online.\n\n\nSAMPLE\nStudents (723 males, 1038 females) were compared to a group of adults with ASC group (69 males, 56 females).\n\n\nAIMS\n(1) To report scores from the SQ-R. (2) To test for SQ-R differences among students in the sciences vs. humanities. (3) To test if AQ can be predicted from EQ and SQ-R scores. (4) To test for sex differences on each of these in a typical sample, and for the absence of a sex difference in a sample with ASC if both males and females with ASC are hyper-masculinized. (5) To report percentages of males, females and people with an ASC who show each brain type.\n\n\nRESULTS\nAQ score was successfully predicted from EQ and SQ-R scores. In the typical group, males scored significantly higher than females on the AQ and SQ-R, and lower on the EQ. The ASC group scored higher than sex-matched controls on the SQ-R, and showed no sex differences on any of the 3 measures. More than twice as many typical males as females were Type S, and more than twice as many typical females as males were Type E. The majority of adults with ASC were Extreme Type S, compared to 5% of typical males and 0.9% of typical females. The EQ had a weak negative correlation with the SQ-R.\n\n\nDISCUSSION\nEmpathizing is largely but not completely independent of systemizing. The weak but significant negative correlation may indicate a trade-off between them. ASC involves impaired empathizing alongside intact or superior systemizing. Future work should investigate the biological basis of these dimensions, and the small trade-off between them.",
"title": ""
},
{
"docid": "a1c1c0402902712c033e999ffc060b4f",
"text": "The traditional Vivaldi antenna has an ultrawide bandwidth, but low directivity. To enhance the directivity, we propose a high-gain Vivaldi antenna based on compactly anisotropic zero-index metamaterials (ZIM). Such anisotropic ZIM are designed and fabricated using resonant meander-line structures, which are integrated with the Vivaldi antenna smoothly and hence have compact size. Measurement results show that the directivity and gain of the Vivaldi antenna have been enhanced significantly in the designed bandwidth of anisotropic ZIM (9.5-10.5 GHz), but not affected in other frequency bands (2.5-9.5 GHz and 10.5-13.5 GHz).",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "6ec4079c4afdd545b531146c86c1e2fb",
"text": "A thorough comprehension of image content demands a complex grasp of the interactions that may occur in the natural world. One of the key issues is to describe the visual relationships between objects. When dealing with real world data, capturing these very diverse interactions is a difficult problem. It can be alleviated by incorporating common sense in a network. For this, we propose a framework that makes use of semantic knowledge and estimates the relevance of object pairs during both training and test phases. Extracted from precomputed models and training annotations, this information is distilled into the neural network dedicated to this task. Using this approach, we observe a significant improvement on all classes of Visual Genome, a challenging visual relationship dataset. A 68.5 % relative gain on the recall at 100 is directly related to the relevance estimate and a 32.7% gain to the knowledge distillation.",
"title": ""
},
{
"docid": "33b1c3b2a999c62fe4f1da5d3cc7f534",
"text": "Individuals often appear with multiple names when considering large bibliographic datasets, giving rise to the synonym ambiguity problem. Although most related works focus on resolving name ambiguities, this work focus on classifying and characterizing multiple name usage patterns—the root cause for such ambiguity. By considering real examples bibliographic datasets, we identify and classify patterns of multiple name usage by individuals, which can be interpreted as name change, rare name usage, and name co-appearance. In particular, we propose a methodology to classify name usage patterns through a supervised classification task and show that different classes are robust (across datasets) and exhibit significantly different properties. We show that the collaboration network structure emerging around nodes corresponding to ambiguous names from different name usage patterns have strikingly different characteristics, such as their common neighborhood and degree evolution. We believe such differences in network structure and in name usage patterns can be leveraged to design more efficient name disambiguation algorithms that target the synonym problem.",
"title": ""
},
{
"docid": "1234c156c0dcebf9c3d1794cd7cbca59",
"text": "We present the mathematical basis of a new approach to the analysis of temporal coding. The foundation of the approach is the construction of several families of novel distances (metrics) between neuronal impulse trains. In contrast to most previous approaches to the analysis of temporal coding, the present approach does not attempt to embed impulse trains in a vector space, and does not assume a Euclidean notion of distance. Rather, the proposed metrics formalize physiologically based hypotheses for those aspects of the firing pattern that might be stimulus dependent, and make essential use of the point-process nature of neural discharges. We show that these families of metrics endow the space of impulse trains with related but inequivalent topological structures. We demonstrate how these metrics can be used to determine whether a set of observed responses has a stimulus-dependent temporal structure without a vector-space embedding. We show how multidimensional scaling can be used to assess the similarity of these metrics to Euclidean distances. For two of these families of metrics (one based on spike times and one based on spike intervals), we present highly efficient computational algorithms for calculating the distances. We illustrate these ideas by application to artificial data sets and to recordings from auditory and visual cortex.",
"title": ""
},
{
"docid": "873bb52a5fe57335c30a0052b5bde4af",
"text": "Firth and Wagner (1997) questioned the dichotomies nonnative versus native speaker, learner versus user , and interlanguage versus target language , which reflect a bias toward innateness, cognition, and form in language acquisition. Research on lingua franca English (LFE) not only affirms this questioning, but reveals what multilingual communities have known all along: Language learning and use succeed through performance strategies, situational resources, and social negotiations in fluid communicative contexts. Proficiency is therefore practicebased, adaptive, and emergent. These findings compel us to theorize language acquisition as multimodal, multisensory, multilateral, and, therefore, multidimensional. The previously dominant constructs such as form, cognition, and the individual are not ignored; they get redefined as hybrid, fluid, and situated in a more socially embedded, ecologically sensitive, and interactionally open model.",
"title": ""
},
{
"docid": "dedc509f31c9b7e6c4409d655a158721",
"text": "Envelope tracking (ET) is by now a well-established technique that improves the efficiency of microwave power amplifiers (PAs) compared to what can be obtained with conventional class-AB or class-B operation for amplifying signals with a time-varying envelope, such as most of those used in present wireless communication systems. ET is poised to be deployed extensively in coming generations of amplifiers for cellular handsets because it can reduce power dissipation for signals using the long-term evolution (LTE) standard required for fourthgeneration (4G) wireless systems, which feature high peak-to-average power ratios (PAPRs). The ET technique continues to be actively developed for higher carrier frequencies and broader bandwidths. This article reviews the concepts and history of ET, discusses several applications currently on the drawing board, presents challenges for future development, and highlights some directions for improving the technique.",
"title": ""
},
{
"docid": "8c50fc49815e406e732f282caba67c7b",
"text": "This paper presents GOM, a language for describing abstract syntax trees and generating a Java implementation for those trees. GOM includes features allowing to specify and modify the interface of the data structure. These features provide in particular the capability to maintain the internal representation of data in canonical form with respect to a rewrite system. This explicitly guarantees that the client program only manipulates normal forms for this rewrite system, a feature which is only implicitly used in many implementations.",
"title": ""
},
{
"docid": "d6496dd2c1e8ac47dc12fde28c83a3d4",
"text": "We describe a natural extension of the banker’s algorithm for deadlock avoidance in operating systems. Representing the control flow of each process as a rooted tree of nodes corresponding to resource requests and releases, we propose a quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before the control leaves a region. Also, information on the maximum resource claims for each of the regions can be extracted prior to process execution. By inserting operating system calls when entering a new region for each process at runtime, and applying the original banker’s algorithm for deadlock avoidance, this method has the potential to achieve better resource utilization because information on the “localized approximate maximum claims” is used for testing system safety.",
"title": ""
},
{
"docid": "2f2c36452ab45c4234904d9b11f28eb7",
"text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"title": ""
},
{
"docid": "ba1cbd5fcd98158911f4fb6f677863f9",
"text": "Classical approaches to clean data have relied on using integrity constraints, statistics, or machine learning. These approaches are known to be limited in the cleaning accuracy, which can usually be improved by consulting master data and involving experts to resolve ambiguity. The advent of knowledge bases KBs both general-purpose and within enterprises, and crowdsourcing marketplaces are providing yet more opportunities to achieve higher accuracy at a larger scale. We propose KATARA, a knowledge base and crowd powered data cleaning system that, given a table, a KB, and a crowd, interprets table semantics to align it with the KB, identifies correct and incorrect data, and generates top-k possible repairs for incorrect data. Experiments show that KATARA can be applied to various datasets and KBs, and can efficiently annotate data and suggest possible repairs.",
"title": ""
},
{
"docid": "9001ffb48ab4dc2437094284df78dfd8",
"text": "This paper develops two motion generation methods for the upper body of humanoid robots based on compensating for the yaw moment of whole body during motion. These upper body motions can effectively solve the stability problem of feet spin for robot walk. We analyze the ground reactive torque, separate the yaw moment as the compensating object and discuss the effect of arms swinging on whole body locomotion. By taking the ZMP as the reference point, trunk spin motion and arms swinging motion are generated to improve the biped motion stability, based on compensating for the yaw moment. The methods are further compared from the energy consumption point of view. Simulated experimental results validate the performance and the feasibility of the proposed methods.",
"title": ""
},
{
"docid": "f530ebff8396da2345537363449b99c9",
"text": "In this research, a fast, accurate, and stable system of lung cancer detection based on novel deep learning techniques is proposed. A convolutional neural network (CNN) structure akin to that of GoogLeNet was built using a transfer learning approach. In contrast to previous studies, Median Intensity Projection (MIP) was employed to include multi-view features of three-dimensional computed tomography (CT) scans. The system was evaluated on the LIDC-IDRI public dataset of lung nodule images and 100-fold data augmentation was performed to ensure training efficiency. The trained system produced 81% accuracy, 84% sensitivity, and 78% specificity after 300 epochs, better than other available programs. In addition, a t-based confidence interval for the population mean of the validation accuracies verified that the proposed system would produce consistent results for multiple trials. Subsequently, a controlled variable experiment was performed to elucidate the net effects of two core factors of the system - fine-tuned GoogLeNet and MIPs - on its detection accuracy. Four treatment groups were set by training and testing fine-tuned GoogLeNet and Alexnet on MIPs and common 2D CT scans, respectively. It was noteworthy that MIPs improved the network's accuracy by 12.3%, and GoogLeNet outperformed Alexnet by 2%. Lastly, remote access to the GPU-based system was enabled through a web server, which allows long-distance management of the system and its future transition into a practical tool.",
"title": ""
},
{
"docid": "88bf67ec7ff0cfa3f1dc6af12140d33b",
"text": "Cloud computing is set of resources and services offered through the Internet. Cloud services are delivered from data centers located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via internet. General example of cloud services is Google apps, provided by Google and Microsoft SharePoint. The rapid growth in field of “cloud computing” also increases severe security concerns. Security has remained a constant issue for Open Systems and internet, when we are talking about security cloud really suffers. Lack of security is the only hurdle in wide adoption of cloud computing. Cloud computing is surrounded by many security issues like securing data, and examining the utilization of cloud by the cloud computing vendors. The wide acceptance www has raised security risks along with the uncountable benefits, so is the case with cloud computing. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. How the end users of cloud computing know that their information is not having any availability and security issues? Every one poses, Is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to know about users and vendors concerns and critical analysis about the different security models and tools proposed.",
"title": ""
},
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
},
{
"docid": "cc8c46399664594cdaa1bfc6c480a455",
"text": "INTRODUCTION\nPatients will typically undergo awake surgery for permanent implantation of spinal cord stimulation (SCS) in an attempt to optimize electrode placement using patient feedback about the distribution of stimulation-induced paresthesia. The present study compared efficacy of first-time electrode placement under awake conditions with that of neurophysiologically guided placement under general anesthesia.\n\n\nMETHODS\nA retrospective review was performed of 387 SCS surgeries among 259 patients which included 167 new stimulator implantation to determine whether first time awake surgery for placement of spinal cord stimulators is preferable to non-awake placement.\n\n\nRESULTS\nThe incidence of device failure for patients implanted using neurophysiologically guided placement under general anesthesia was one-half that for patients implanted awake (14.94% vs. 29.7%).\n\n\nCONCLUSION\nNon-awake surgery is associated with fewer failure rates and therefore fewer re-operations, making it a viable alternative. Any benefits of awake implantation should carefully be considered in the future.",
"title": ""
},
{
"docid": "41c69d2cc40964e54d9ea8a8d4f5f154",
"text": "In computer vision, action recognition refers to the act of classifying an action that is present in a given video and action detection involves locating actions of interest in space and/or time. Videos, which contain photometric information (e.g. RGB, intensity values) in a lattice structure, contain information that can assist in identifying the action that has been imaged. The process of action recognition and detection often begins with extracting useful features and encoding them to ensure that the features are specific to serve the task of action recognition and detection. Encoded features are then processed through a classifier to identify the action class and their spatial and/or temporal locations. In this report, a thorough review of various action recognition and detection algorithms in computer vision is provided by analyzing the two-step process of a typical action recognition and detection algorithm: (i) extraction and encoding of features, and (ii) classifying features into action classes. In efforts to ensure that computer vision-based algorithms reach the capabilities that humans have of identifying actions irrespective of various nuisance variables that may be present within the field of view, the state-of-the-art methods are reviewed and some remaining problems are addressed in the final chapter.",
"title": ""
}
] |
scidocsrr
|
d77057f8632c4afac993c093d101deee
|
Towards operationalizing complexity leadership: How generative, administrative and community-building leadership practices enact organizational outcomes
|
[
{
"docid": "018d05daa52fb79c17519f29f31026d7",
"text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.",
"title": ""
}
] |
[
{
"docid": "7e02da9e8587435716db99396c0fbbc7",
"text": "To examine thrombus formation in a living mouse, new technologies involving intravital videomicroscopy have been applied to the analysis of vascular windows to directly visualize arterioles and venules. After vessel wall injury in the microcirculation, thrombus development can be imaged in real time. These systems have been used to explore the role of platelets, blood coagulation proteins, endothelium, and the vessel wall during thrombus formation. The study of biochemistry and cell biology in a living animal offers new understanding of physiology and pathology in complex biologic systems.",
"title": ""
},
{
"docid": "2cde7564c83fe2b75135550cb4847af0",
"text": "The twenty-first century global population will be increasingly urban-focusing the sustainability challenge on cities and raising new challenges to address urban resilience capacity. Landscape ecologists are poised to contribute to this challenge in a transdisciplinary mode in which science and research are integrated with planning policies and design applications. Five strategies to build resilience capacity and transdisciplinary collaboration are proposed: biodiversity; urban ecological networks and connectivity; multifunctionality; redundancy and modularization, adaptive design. Key research questions for landscape ecologists, planners and designers are posed to advance the development of knowledge in an adaptive mode.",
"title": ""
},
{
"docid": "712ce2aaf021d863c02a4de6b3596bf4",
"text": "A spatial outlier is a spatial referenced object whose non-spatial attribute values are significantly different from those of other spatially referenced objects in its spatial neighborhood. It represents locations that are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Here we adopt this definition to spatio-temporal domain and define a spatialtemporal outlier (STO) to be a spatial-temporal referenced object whose thematic attribute values are significantly different from those of other spatially and temporally referenced objects in its spatial or/and temporal neighborhood. Identification of STOs can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. Many methods have been recently proposed to detect spatial outliers, but how to detect the temporal outliers or spatial-temporal outliers has been seldom discussed. In this paper we propose a hybrid approach which integrates several data mining methods such as clustering, aggregation and comparisons to detect the STOs by evaluating the change between consecutive spatial and temporal scales. INTRODUCTION Outliers are data objects that appear inconsistent with respect to the remainder of the database (Barnett and Lewis, 1994). While in many cases these can be anomalies or noise, sometimes these represent rare or unusual events to be investigated further. In general, direct methods for outlier detection include distribution-based, depth-based and distancebased approaches. Distribution-based approaches use standard statistical distribution, depth-based technique map data objects into an m-dimensional information space (where m is the number of attribute) and distance-based approaches calculate the proportion of database objects that are a specified distance from a target object (Ng, 2001). A spatial outlier is a spatial referenced object whose non-spatial attribute values are significantly different from those of other spatially referenced objects in its spatial neighborhood. It represents locations that are significantly different from their neighborhoods even though they may not be significantly different from the entire population (Shekhar et al, 2003). Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. Many methods have been recently proposed to detect spatial outliers by the distributionbased approach. These methods can be broadly classified into two categories, namely 1-D (linear) outlier detection methods and multi-dimensional outlier detection methods (Shekhar et al, 2003). The 1-D outlier detection algorithms consider the statistical distribution of non-spatial attribute values, ignoring the spatial relationships between items.",
"title": ""
},
{
"docid": "cb4c33d4adfc7f3c0b659edcfd774e8b",
"text": "Convolutional Neural Networks (CNNs) have achieved comparable error rates to well-trained human on ILSVRC2014 image classification task. To achieve better performance, the complexity of CNNs is continually increasing with deeper and bigger architectures. Though CNNs achieved promising external classification behavior, understanding of their internal work mechanism is still limited. In this work, we attempt to understand the internal work mechanism of CNNs by probing the internal representations in two comprehensive aspects, i.e., visualizing patches in the representation spaces constructed by different layers, and visualizing visual information kept in each layer. We further compare CNNs with different depths and show the advantages brought by deeper architecture.",
"title": ""
},
{
"docid": "6599d981e445798f5b1ba3dcbf233435",
"text": "Global climate change is expected to affect temperature and precipitation patterns, oceanic and atmospheric circulation, rate of rising sea level, and the frequency, intensity, timing, and distribution of hurricanes and tropical storms. The magnitude of these projected physical changes and their subsequent impacts on coastal wetlands will vary regionally. Coastal wetlands in the southeastern United States have naturally evolved under a regime of rising sea level and specific patterns of hurricane frequency, intensity, and timing. A review of known ecological effects of tropical storms and hurricanes indicates that storm timing, frequency, and intensity can alter coastal wetland hydrology, geomorphology, biotic structure, energetics, and nutrient cycling. Research conducted to examine the impacts of Hurricane Hugo on colonial waterbirds highlights the importance of longterm studies for identifying complex interactions that may otherwise be dismissed as stochastic processes. Rising sea level and even modest changes in the frequency, intensity, timing, and distribution of tropical storms and hurricanes are expected to have substantial impacts on coastal wetland patterns and processes. Persistence of coastal wetlands will be determined by the interactions of climate and anthropogenic effects, especially how humans respond to rising sea level and how further human encroachment on coastal wetlands affects resource exploitation, pollution, and water use. Long-term changes in the frequency, intensity, timing, and distribution of hurricanes and tropical storms will likely affect biotic functions (e.g., community structure, natural selection, extinction rates, and biodiversity) as well as underlying processes such as nutrient cycling and primary and secondary productivity. Reliable predictions of global-change impacts on coastal wetlands will require better understanding of the linkages among terrestrial, aquatic, wetland, atmospheric, oceanic, and human components. Developing this comprehensive understanding of the ecological ramifications of global change will necessitate close coordination among scientists from multiple disciplines and a balanced mixture of appropriate scientific approaches. For example, insights may be gained through the careful design and implementation of broadscale comparative studies that incorporate salient patterns and processes, including treatment of anthropogenic influences. Well-designed, broad-scale comparative studies could serve as the scientific framework for developing relevant and focused long-term ecological research, monitoring programs, experiments, and modeling studies. Two conceptual models of broad-scale comparative research for assessing ecological responses to climate change are presented: utilizing space-for-time substitution coupled with long-term studies to assess impacts of rising sea level and disturbance on coastal wetlands, and utilizing the moisturecontinuum model for assessing the effects of global change and associated shifts in moisture regimes on wetland ecosystems. Increased understanding of climate change will require concerted scientific efforts aimed at facilitating interdisciplinary research, enhancing data and information management, and developing new funding strategies.",
"title": ""
},
{
"docid": "a9f8c6d1d10bedc23b100751c607f7db",
"text": "Successful efforts in hand gesture recognition research within the last two decades paved the path for natural human–computer interaction systems. Unresolved challenges such as reliable identification of gesturing phase, sensitivity to size, shape, and speed variations, and issues due to occlusion keep hand gesture recognition research still very active. We provide a review of vision-based hand gesture recognition algorithms reported in the last 16 years. The methods using RGB and RGB-D cameras are reviewed with quantitative and qualitative comparisons of algorithms. Quantitative comparison of algorithms is done using a set of 13 measures chosen from different attributes of the algorithm and the experimental methodology adopted in algorithm evaluation. We point out the need for considering these measures together with the recognition accuracy of the algorithm to predict its success in real-world applications. The paper also reviews 26 publicly available hand gesture databases and provides the web-links for their download. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1cacfd4da5273166debad8a6c1b72754",
"text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.",
"title": ""
},
{
"docid": "08331361929f3634bc705221ec25287c",
"text": "The present study used pleasant and unpleasant music to evoke emotion and functional magnetic resonance imaging (fMRI) to determine neural correlates of emotion processing. Unpleasant (permanently dissonant) music contrasted with pleasant (consonant) music showed activations of amygdala, hippocampus, parahippocampal gyrus, and temporal poles. These structures have previously been implicated in the emotional processing of stimuli with (negative) emotional valence; the present data show that a cerebral network comprising these structures can be activated during the perception of auditory (musical) information. Pleasant (contrasted to unpleasant) music showed activations of the inferior frontal gyrus (IFG, inferior Brodmann's area (BA) 44, BA 45, and BA 46), the anterior superior insula, the ventral striatum, Heschl's gyrus, and the Rolandic operculum. IFG activations appear to reflect processes of music-syntactic analysis and working memory operations. Activations of Rolandic opercular areas possibly reflect the activation of mirror-function mechanisms during the perception of the pleasant tunes. Rolandic operculum, anterior superior insula, and ventral striatum may form a motor-related circuitry that serves the formation of (premotor) representations for vocal sound production during the perception of pleasant auditory information. In all of the mentioned structures, except the hippocampus, activations increased over time during the presentation of the musical stimuli, indicating that the effects of emotion processing have temporal dynamics; the temporal dynamics of emotion have so far mainly been neglected in the functional imaging literature.",
"title": ""
},
{
"docid": "638e0059bf390b81de2202c22427b937",
"text": "Oral and gastrointestinal mucositis is a toxicity of many forms of radiotherapy and chemotherapy. It has a significant impact on health, quality of life and economic outcomes that are associated with treatment. It also indirectly affects the success of antineoplastic therapy by limiting the ability of patients to tolerate optimal tumoricidal treatment. The complex pathogenesis of mucositis has only recently been appreciated and reflects the dynamic interactions of all of the cell and tissue types that comprise the epithelium and submucosa. The identification of the molecular events that lead to treatment-induced mucosal injury has provided targets for mechanistically based interventions to prevent and treat mucositis.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "8f2c7770fdcd9bfe6a7e9c6e10569fc7",
"text": "The purpose of this paper is to explore the importance of Information Technology (IT) Governance models for public organizations and presenting an IT Governance model that can be adopted by both practitioners and researchers. A review of the literature in IT Governance has been initiated to shape the intended theoretical background of this study. The systematic literature review formalizes a richer context for the IT Governance concept. An empirical survey, using a questionnaire based on COBIT 4.1 maturity model used to investigate IT Governance practice in multiple case studies from Kingdom of Bahrain. This method enabled the researcher to gain insights to evaluate IT Governance practices. The results of this research will enable public sector organizations to adopt an IT Governance model in a simple and dynamic manner. The model provides a basic structure of a concept; for instance, this allows organizations to gain a better perspective on IT Governance processes and provides a clear focus for decision-making attention. IT Governance model also forms as a basis for further research in IT Governance adoption models and bridges the gap between conceptual frameworks, real life and functioning governance.",
"title": ""
},
{
"docid": "4a9debbbe5b21adcdb50bfdc0c81873c",
"text": "Stealth Dicing (SD) technology has high potential to replace the conventional blade sawing and laser grooving. The dicing method has been widely researched since 2005 [1-3] especially for thin wafer (⇐ 12 mils). SD cutting has good quality because it has dry process during laser cutting, extremely narrow scribe line and multi-die sawing capability. However, along with complicated package technology, the chip quality demands fine and accurate pitch which conventional blade saw is impossible to achieve. This paper is intended as an investigation in high performance SD sawing, including multi-pattern wafer and DAF dicing tape capability. With the improvement of low-K substrate technology and min chip scale size, SD cutting is more important than other methods used before. Such sawing quality also occurs in wafer level chip scale package. With low-K substrate and small package, the SD cutting method can cut the narrow scribe line easily (15 um), which can lead the WLCSP to achieve more complicated packing method successfully.",
"title": ""
},
{
"docid": "07354d1830a06a565e94b46334acda69",
"text": "Evidence from developmental psychology suggests that understanding other minds constitutes a special domain of cognition with at least two components: an early-developing system for reasoning about goals, perceptions, and emotions, and a later-developing system for representing the contents of beliefs. Neuroimaging reinforces and elaborates upon this view by providing evidence that (a) domain-specific brain regions exist for representing belief contents, (b) these regions are apparently distinct from other regions engaged in reasoning about goals and actions (suggesting that the two developmental stages reflect the emergence of two distinct systems, rather than the elaboration of a single system), and (c) these regions are distinct from brain regions engaged in inhibitory control and in syntactic processing. The clear neural distinction between these processes is evidence that belief attribution is not dependent on either inhibitory control or syntax, but is subserved by a specialized neural system for theory of mind.",
"title": ""
},
{
"docid": "c6005a99e6a60a4ee5f958521dcad4d3",
"text": "We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits. For more information: Kod*Lab Disciplines Electrical and Computer Engineering | Engineering | Systems Engineering Comments BibTeX entry @article{canid_spie_2013, author = {Pusey, Jason L. and Duperret, Jeffrey M. and Haynes, G. Clark and Knopf, Ryan and Koditschek , Daniel E.}, title = {Free-Standing Leaping Experiments with a PowerAutonomous, Elastic-Spined Quadruped}, pages = {87410W-87410W-15}, year = {2013}, doi = {10.1117/ 12.2016073} } This work is supported by the National Science Foundation Graduate Research Fellowship under Grant Number DGE-0822, and by the Army Research Laboratory under Cooperative Agreement Number W911NF-10–2−0016. Copyright 2013 Society of Photo-Optical Instrumentation Engineers. Postprint version. This paper was (will be) published in Proceedings of the SPIE Defense, Security, and Sensing Conference, Unmanned Systems Technology XV (8741), and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/ese_papers/655 Free-Standing Leaping Experiments with a Power-Autonomous, Elastic-Spined Quadruped Jason L. Pusey a , Jeffrey M. Duperret b , G. Clark Haynes c , Ryan Knopf b , and Daniel E. Koditschek b a U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, b University of Pennsylvania, Philadelphia, PA, c National Robotics Engineering Center, Carnegie Mellon University, Pittsburgh, PA",
"title": ""
},
{
"docid": "c0b22c68ee02c2adffa7fa9cdfd15812",
"text": "In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of motor CM-voltage active compensator and input EMI filter allows the drive system to comply with EMC standards and to yield an increased reliability at the same time. Two CM input EMI filters are built and compared. They are, designed, respectively, according to the conventional design procedure and considering the actual impedance mismatching between EMI source and receiver. In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system to standard limits.",
"title": ""
},
{
"docid": "b49e61ecb2afbaa8c3b469238181ec26",
"text": "Stylistic variations of language, such as formality, carry speakers’ intention beyond literal meaning and should be conveyed adequately in translation. We propose to use lexical formality models to control the formality level of machine translation output. We demonstrate the effectiveness of our approach in empirical evaluations, as measured by automatic metrics and human assessments.",
"title": ""
},
{
"docid": "2cc7019de113899274080f538de0540c",
"text": "Chitosan was prepared from shrimp processing waste (shell) using the same chemical process as described for the other crustacean species with minor modification in the treatment condition. The physicochemical properties, molecular weight (165394g/mole), degree of deacetylation (75%), ash content as well as yield (15%) of prepared chitosan indicated that shrimp processing waste (shell) are a good source of chitosan. The water binding capacity (502%) and fat binding capacity (370%) of prepared chitosan are good agreement with the commercial chitosan. FT-IR spectra gave characteristics bands of –NH2 at 3443cm -1 and carbonyl at 1733cm. X-ray diffraction (XRD) patterns also indicated two characteristics crystalline peaks approximately at 10° and 20° (2θ).The surface morphology was examined using scanning electron microscopy (SEM). Index Term-Shrimp waste, Chitin, Deacetylation, Chitosan,",
"title": ""
},
{
"docid": "e077a3c57b1df490d418a2b06cf14b2c",
"text": "Inductive power transfer (IPT) is widely discussed for the automated opportunity charging of plug-in hybrid and electric public transport buses without moving mechanical components and reduced maintenance requirements. In this paper, the design of an on-board active rectifier and dc–dc converter for interfacing the receiver coil of a 50 kW/85 kHz IPT system is designed. Both conversion stages employ 1.2 kV SiC MOSFET devices for their low switching losses. For the dc–dc conversion, a modular, nonisolated buck+boost-type topology with coupled magnetic devices is used for increasing the power density. For the presented hardware prototype, a power density of 9.5 kW/dm3 (or 156 W/in3) is achieved, while the ac–dc efficiency from the IPT receiver coil to the vehicle battery is 98.6%. Comprehensive experimental results are presented throughout this paper to support the theoretical analysis.",
"title": ""
},
{
"docid": "4de2c6422d8357e6cb00cce21e703370",
"text": "OBJECTIVE\nFalls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.\n\n\nDESIGN/SETTING/PARTICIPANTS\nProspective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.\n\n\nMEASUREMENTS\nFalls were reported on a standardized form that included a facility identification code, date, time of the day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.\n\n\nRESULTS\nMore than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need with lower fall risks both in the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.\n\n\nCONCLUSION\nThe differing fall risk patterns in specific subgroups may help to target preventive measures.",
"title": ""
},
{
"docid": "2753c131bafcd392116383a04d3066b2",
"text": "With the massive construction of the China high-speed railway, it is of a great significance to propose an automatic approach to inspect the defects of the catenary support devices. Based on the obtained high resolution images, the detection and extraction of the components on the catenary support devices are the vital steps prior to their defect report. Inspired by the existing object detection Faster R-CNN framework, a cascaded convolutional neural network (CNN) architecture is built to successively detect the various components and the tiny fasteners in the complex catenary support device structures. Meanwhile, some missing states of the fasteners on the cantilever joints are directly reported via our proposed architecture. Experiments on the Wuhan-Guangzhou high-speed railway dataset demonstrate a practical performance of the component detection with good adaptation and robustness in complex environments, feasible to accurately inspect the extremely tiny defects on the various catenary components.",
"title": ""
}
] |
scidocsrr
|
d9f5438e76dc0fddb745e99e13477dcf
|
Edgecourier: an edge-hosted personal service for low-bandwidth document synchronization in mobile cloud storage services
|
[
{
"docid": "2c4babb483ddd52c9f1333cbe71a3c78",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
}
] |
[
{
"docid": "9d28e5b6ad14595cd2d6b4071a867f6f",
"text": "This paper presents the analysis and the comparison study of a High-voltage High-frequency Ozone Generator using PWM and Phase-Shifted PWM full-bridge inverter as a power supply. The circuits operations of the inverters are fully described. In order to ensure that zero voltage switching (ZVS) mode always operated over a certain range of a frequency variation, a series-compensated resonant inductor is included. The comparison study are ozone quantity and output voltage that supplied by the PWM and Phase-Shifted PWM full-bridge inverter. The ozone generator fed by Phase-Shifted PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency and phase shift angle of the converter whilst the applied voltage to the electrode is kept constant. However, the ozone generator fed by PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency of the converter whilst the applied voltage to the electrode is decreased. As a consequence, the absolute ozone quantity affected by the frequency is possibly achieved.",
"title": ""
},
{
"docid": "423cba015a9cfc247943dd7d3c4ea1cf",
"text": "No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or informa tion storage and retrieval) without permission in writing from the publisher. Preface Probability is common sense reduced to calculation Laplace This book is an outgrowth of our involvement in teaching an introductory prob ability course (\"Probabilistic Systems Analysis'�) at the Massachusetts Institute of Technology. The course is attended by a large number of students with diverse back grounds, and a broad range of interests. They span the entire spectrum from freshmen to beginning graduate students, and from the engineering school to the school of management. Accordingly, we have tried to strike a balance between simplicity in exposition and sophistication in analytical reasoning. Our key aim has been to develop the ability to construct and analyze probabilistic models in a manner that combines intuitive understanding and mathematical precision. In this spirit, some of the more mathematically rigorous analysis has been just sketched or intuitively explained in the text. so that complex proofs do not stand in the way of an otherwise simple exposition. At the same time, some of this analysis is developed (at the level of advanced calculus) in theoretical prob lems, that are included at the end of the corresponding chapter. FUrthermore, some of the subtler mathematical issues are hinted at in footnotes addressed to the more attentive reader. The book covers the fundamentals of probability theory (probabilistic mod els, discrete and continuous random variables, multiple random variables, and limit theorems), which are typically part of a first course on the subject. It also contains, in Chapters 4-6 a number of more advanced topics, from which an instructor can choose to match the goals of a particular course. In particular, in Chapter 4, we develop transforms, a more advanced view of conditioning, sums of random variables, least squares estimation, and the bivariate normal distribu-v vi Preface tion. Furthermore, in Chapters 5 and 6, we provide a fairly detailed introduction to Bernoulli, Poisson, and Markov processes. Our M.LT. course covers all seven chapters in a single semester, with the ex ception of the material on the bivariate normal (Section 4.7), and on continuous time Markov chains (Section 6.5). However, in an alternative course, the material on stochastic processes could be omitted, thereby allowing additional emphasis on foundational material, or coverage of other topics of the instructor's choice. Our …",
"title": ""
},
{
"docid": "7e1712f9e2846862d072c902a84b2832",
"text": "Reinforcement learning is a computational approach to learn from interaction. However, learning from scratch using reinforcement learning requires exorbitant number of interactions with the environment even for simple tasks. One way to alleviate the problem is to reuse previously learned skills as done by humans. This thesis provides frameworks and algorithms to build and reuse Skill Library. Firstly, we extend the Parameterized Action Space formulation using our Skill Library to multi-goal setting and show improvements in learning using hindsight at coarse level. Secondly, we use our Skill Library for exploring at a coarser level to learn the optimal policy for continuous control. We demonstrate the benefits, in terms of speed and accuracy, of the proposed approaches for a set of real world complex robotic manipulation tasks in which some state-of-the-art methods completely fail.",
"title": ""
},
{
"docid": "6f484310532a757a28c427bad08f7623",
"text": "We address the problem of tracking and recognizing faces in real-world, noisy videos. We track faces using a tracker that adaptively builds a target model reflecting changes in appearance, typical of a video setting. However, adaptive appearance trackers often suffer from drift, a gradual adaptation of the tracker to non-targets. To alleviate this problem, our tracker introduces visual constraints using a combination of generative and discriminative models in a particle filtering framework. The generative term conforms the particles to the space of generic face poses while the discriminative one ensures rejection of poorly aligned targets. This leads to a tracker that significantly improves robustness against abrupt appearance changes and occlusions, critical for the subsequent recognition phase. Identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of a video sequence. This leads to a robust video-based face recognizer with state-of-the-art recognition performance. We test the quality of tracking and face recognition on real-world noisy videos from YouTube as well as the standard Honda/UCSD database. Our approach produces successful face tracking results on over 80% of all videos without video or person-specific parameter tuning. The good tracking performance induces similarly high recognition rates: 100% on Honda/UCSD and over 70% on the YouTube set containing 35 celebrities in 1500 sequences.",
"title": ""
},
{
"docid": "09e9a3c3ae9552d675aea363b672312d",
"text": "Substrate Integrated Waveguides (SIW) are used for transmission of Electromagnetic waves. They are planar structures belonging to the family of Substrate Integrated Circuits. Because of their planar nature, they can be fabricated on planar circuits like Printed Circuit Boards (PCB) and can be integrated with other planar transmission lines like microstrips. They retain the low loss property of their conventional metallic waveguides and are widely used as interconnection in high speed circuits, filters, directional couplers, antennas. This paper is a comprehensive review of Substrate Integrated Waveguide and its integration with Microstrip line. In this paper, design techniques for SIW and its microstrip interconnect are presented. HFSS is used for simulation results. The objective of this paper is to provide broad perspective of SIW Technology.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "e92f19a7d99df50321f21ce639a84a35",
"text": "Software tagging has been shown to be an efficient, lightweight social computing mechanism to improve different social and technical aspects of software development. Despite the importance of tags, there exists limited support for automatic tagging for software artifacts, especially during the evolutionary process of software development. We conducted an empirical study on IBM Jazz's repository and found that there are several missing tags in artifacts and more precise tags are desirable. This paper introduces a novel, accurate, automatic tagging recommendation tool that is able to take into account users' feedbacks on tags, and is very efficient in coping with software evolution. The core technique is an automatic tagging algorithm that is based on fuzzy set theory. Our empirical evaluation on the real-world IBM Jazz project shows the usefulness and accuracy of our approach and tool.",
"title": ""
},
{
"docid": "460aa0df99a3e88a752d5f657f1565de",
"text": "Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is musical anhednia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even to which he had listened pleasantly before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music could be selectively impaired without any disturbance of other musical, neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.",
"title": ""
},
{
"docid": "dfae6cf3df890c8cfba756384c4e88e6",
"text": "In this paper, we propose a second order optimization method to learn models where both the dimensionality of the parameter space and the number of training samples is high. In our method, we construct on each iteratio n a Krylov subspace formed by the gradient and an approximation to the Hess ian matrix, and then use a subset of the training data samples to optimize ove r this subspace. As with the Hessian Free (HF) method of [6], the Hessian matrix i s never explicitly constructed, and is computed using a subset of data. In p ractice, as in HF, we typically use a positive definite substitute for the Hessi an matrix such as the Gauss-Newton matrix. We investigate the effectiveness of o ur proposed method on learning the parameters of deep neural networks, and comp are its performance to widely used methods such as stochastic gradient descent, conjugate gradient descent and L-BFGS, and also to HF. Our method leads to faster convergence than either L-BFGS or HF, and generally performs better than either of them in cross-validation accuracy. It is also simpler and more gene ral than HF, as it does not require a positive semi-definite approximation of the He ssian matrix to work well nor the setting of a damping parameter. The chief drawba ck versus HF is the need for memory to store a basis for the Krylov subspace.",
"title": ""
},
{
"docid": "c92807c973f51ac56fe6db6c2bb3f405",
"text": "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.",
"title": ""
},
{
"docid": "c9077052caa804aaa58d43aaf8ba843f",
"text": "Many authors have laid down a concept about organizational learning and the learning organization. Amongst them They contributed an explanation on how organizations learn and provided tools to transfer the theoretical concepts of organizational learning into practice. Regarding the present situation it seems, that organizational learning becomes even more important. This paper provides a complementary view on the learning organization from the perspective of the evolutionary epistemology. The evolutionary epistemology gives an answer, where the subjective structures of cognition come from and why they are similar in all human beings. Applying this evolutionary concept to organizations it could be possible to provide a deeper insight of the cognition processes of organizations and explain the principles that lay behind a learning organization. It also could give an idea, which impediments in learning, caused by natural dispositions, deduced from genetic barriers of cognition in biology are existing and managers must be aware of when trying to facilitate organizational learning within their organizations.",
"title": ""
},
{
"docid": "ad0892ee2e570a8a2f5a90883d15f2d2",
"text": "Supervised event extraction systems are limited in their accuracy due to the lack of available training data. We present a method for self-training event extraction systems by bootstrapping additional training data. This is done by taking advantage of the occurrence of multiple mentions of the same event instances across newswire articles from multiple sources. If our system can make a highconfidence extraction of some mentions in such a cluster, it can then acquire diverse training examples by adding the other mentions as well. Our experiments show significant performance improvements on multiple event extractors over ACE 2005 and TAC-KBP 2015 datasets.",
"title": ""
},
{
"docid": "c08fa2224b8a38b572ea546abd084bd1",
"text": "Off-chip main memory has long been a bottleneck for system performance. With increasing memory pressure due to multiple on-chip cores, effective cache utilization is important. In a system with limited cache space, we would ideally like to prevent 1) cache pollution, i.e., blocks with low reuse evicting blocks with high reuse from the cache, and 2) cache thrashing, i.e., blocks with high reuse evicting each other from the cache.\n In this paper, we propose a new, simple mechanism to predict the reuse behavior of missed cache blocks in a manner that mitigates both pollution and thrashing. Our mechanism tracks the addresses of recently evicted blocks in a structure called the Evicted-Address Filter (EAF). Missed blocks whose addresses are present in the EAF are predicted to have high reuse and all other blocks are predicted to have low reuse. The key observation behind this prediction scheme is that if a block with high reuse is prematurely evicted from the cache, it will be accessed soon after eviction. We show that an EAF-implementation using a Bloom filter, which is cleared periodically, naturally mitigates the thrashing problem by ensuring that only a portion of a thrashing working set is retained in the cache, while incurring low storage cost and implementation complexity.\n We compare our EAF-based mechanism to five state-of-the-art mechanisms that address cache pollution or thrashing, and show that it provides significant performance improvements for a wide variety of workloads and system configurations.",
"title": ""
},
{
"docid": "ada1db1673526f98840291977998773d",
"text": "The effect of immediate versus delayed feedback on rule-based and information-integration category learning was investigated. Accuracy rates were examined to isolate global performance deficits, and model-based analyses were performed to identify the types of response strategies used by observers. Feedback delay had no effect on the accuracy of responding or on the distribution of best fitting models in the rule-based category-learning task. However, delayed feedback led to less accurate responding in the information-integration category-learning task. Model-based analyses indicated that the decline in accuracy with delayed feedback was due to an increase in the use of rule-based strategies to solve the information-integration task. These results provide support for a multiple-systems approach to category learning and argue against the validity of single-system approaches.",
"title": ""
},
{
"docid": "fee191728bc0b1fbf11344961be10215",
"text": "In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. Disciplines Computer Sciences Comments Vanderwende, L., Suzuki, H., Brockett, C., & Nenkova, A., Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion, Information Processing and Management, Special Issue on Summarization Volume 43, Issue 6, 2007, doi: 10.1016/j.ipm.2007.01.023 This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/736",
"title": ""
},
{
"docid": "5528f1ee010e7fba440f1f7ff84a3e8e",
"text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …",
"title": ""
},
{
"docid": "980bc7323411806e6e4faffe0b7303e2",
"text": "The ability to generate intermediate frames between two given images in a video sequence is an essential task for video restoration and video post-processing. In addition, restoration requires robust denoising algorithms, must handle corrupted frames and recover from impaired frames accordingly. In this paper we present a unified framework for all these tasks. In our approach we use a variant of the TV-L denoising algorithm that operates on image sequences in a space-time volume. The temporal derivative is modified to take the pixels’ movement into account. In order to steer the temporal gradient in the desired direction we utilize optical flow to estimate the velocity vectors between consecutive frames. We demonstrate our approach on impaired movie sequences as well as on benchmark datasets where the ground-truth is known.",
"title": ""
},
{
"docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442",
"text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.",
"title": ""
},
{
"docid": "cb1e6d11d372e72f7675a55c8f2c429d",
"text": "We evaluate the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system ar chitecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a h ost computer. A LabVIEW TM host application controlling a frame grabber and an industrial camer a is used to capture and exchange video data with t he hardware co-processor via a high speed USB2.0 chann el, implemented with a standard macrocell. The FPGA accelerator is based on a Altera Cyclone II ch ip and is designed as a system-on-a-programmablechip (SOPC) with the help of an embedded Nios II so ftware processor. The SOPC system integrates the CPU, external and on chip memory, the communication channel and typical image filters appropriate for the evaluation of the system performance. Measured tran sfer rates over the communication channel and processing times for the implemented hardware/softw are logic are presented for various frame sizes. A comparison with other solutions is given and a rang e of applications is also discussed.",
"title": ""
},
{
"docid": "d88ce8a3e9f669c40b21710b69ac11be",
"text": "The smart city concept represents a compelling platform for IT-enabled service innovation. It offers a view of the city where service providers use information technologies to engage with citizens to create more effective urban organizations and systems that can improve the quality of life. The emerging Internet of Things (IoT) model is foundational to the development of smart cities. Integrated cloud-oriented architecture of networks, software, sensors, human interfaces, and data analytics are essential for value creation. IoT smart-connected products and the services they provision will become essential for the future development of smart cities. This paper will explore the smart city concept and propose a strategy development model for the implementation of IoT systems in a smart city context.",
"title": ""
}
] |
scidocsrr
|
e2acab0b5a67c2b65198d6c2461e33c6
|
Identification and Detection of Phishing Emails Using Natural Language Processing Techniques
|
[
{
"docid": "5cb8c778f0672d88241cc22da9347415",
"text": "Phishing websites, fraudulent sites that impersonate a trusted third party to gain access to private data, continue to cost Internet users over a billion dollars each year. In this paper, we describe the design and performance characteristics of a scalable machine learning classifier we developed to detect phishing websites. We use this classifier to maintain Google’s phishing blacklist automatically. Our classifier analyzes millions of pages a day, examining the URL and the contents of a page to determine whether or not a page is phishing. Unlike previous work in this field, we train the classifier on a noisy dataset consisting of millions of samples from previously collected live classification data. Despite the noise in the training data, our classifier learns a robust model for identifying phishing pages which correctly classifies more than 90% of phishing pages several weeks after training concludes.",
"title": ""
},
{
"docid": "00410fcb0faa85d5423ccf0a7cc2f727",
"text": "Phishing is form of identity theft that combines social engineering techniques and sophisticated attack vectors to harvest financial information from unsuspecting consumers. Often a phisher tries to lure her victim into clicking a URL pointing to a rogue page. In this paper, we focus on studying the structure of URLs employed in various phishing attacks. We find that it is often possible to tell whether or not a URL belongs to a phishing attack without requiring any knowledge of the corresponding page data. We describe several features that can be used to distinguish a phishing URL from a benign one. These features are used to model a logistic regression filter that is efficient and has a high accuracy. We use this filter to perform thorough measurements on several million URLs and quantify the prevalence of phishing on the Internet today",
"title": ""
}
] |
[
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "6432df2102cc9140f9a586abd5d44a90",
"text": "BACKGROUND\nLimited information is available from randomized clinical trials comparing the longevity of amalgam and resin-based compomer/composite restorations. The authors compared replacement rates of these types of restorations in posterior teeth during the five-year follow-up of the New England Children's Amalgam Trial.\n\n\nMETHODS\nThe authors randomized children aged 6 to 10 years who had two or more posterior occlusal carious lesions into groups that received amalgam (n=267) or compomer (primary teeth)/composite (permanent teeth) (n=267) restorations and followed them up semiannually. They compared the longevity of restorations placed on all posterior surfaces using random effects survival analysis.\n\n\nRESULTS\nThe average+/-standard deviation follow-up was 2.8+/-1.4 years for primary tooth restorations and 3.4+/-1.9 years for permanent tooth restorations. In primary teeth, the replacement rate was 5.8 percent of compomers versus 4.0 percent of amalgams (P=.10), with 3.0 percent versus 0.5 percent (P=.002), respectively, due to recurrent caries. In permanent teeth, the replacement rate was 14.9 percent of composites versus 10.8 percent of amalgams (P=.45), and the repair rate was 2.8 percent of composites versus 0.4 percent of amalgams (P=.02).\n\n\nCONCLUSION\nAlthough the overall difference in longevity was not statistically significant, compomer was replaced significantly more frequently owing to recurrent caries, and composite restorations required seven times as many repairs as did amalgam restorations.\n\n\nCLINICAL IMPLICATIONS\nCompomer/composite restorations on posterior tooth surfaces in children may require replacement or repair at higher rates than amalgam restorations, even within five years of placement.",
"title": ""
},
{
"docid": "919ce1951d219970a05086a531b9d796",
"text": "Anti-neutrophil cytoplasmic autoantibodies (ANCA) and anti-glomerular basement membrane (GBM) necrotizing and crescentic glomerulonephritis are aggressive and destructive glomerular diseases that are associated with and probably caused by circulating ANCA and anti-GBM antibodies. These necrotizing lesions are manifested by acute nephritis and deteriorating kidney function often accompanied by distinctive clinical features of systemic disease. Prompt diagnosis requires clinical acumen that allows for the prompt institution of therapy aimed at removing circulating autoantibodies and quelling the inflammatory process. Continuing exploration of the etiology and pathogenesis of these aggressive inflammatory diseases have gradually uncovered new paradigms for the cause of and more specific therapy for these particular glomerular disorders and for autoimmune glomerular diseases in general.",
"title": ""
},
{
"docid": "5c50099c8a4e638736f430e3b5622b1d",
"text": "BACKGROUND\nAccording to the existential philosophers, meaning, purpose and choice are necessary for quality of life. Qualitative researchers exploring the perspectives of people who have experienced health crises have also identified the need for meaning, purpose and choice following life disruptions. Although espousing the importance of meaning in occupation, occupational therapy theory has been primarily preoccupied with purposeful occupations and thus appears inadequate to address issues of meaning within people's lives.\n\n\nPURPOSE\nThis paper proposes that the fundamental orientation of occupational therapy should be the contributions that occupation makes to meaning in people's lives, furthering the suggestion that occupation might be viewed as comprising dimensions of meaning: doing, being, belonging and becoming. Drawing upon perspectives and research from philosophers, social scientists and occupational therapists, this paper will argue for a renewed understanding of occupation in terms of dimensions of meaning rather than as divisible activities of self-care, productivity and leisure.\n\n\nPRACTICE IMPLICATIONS\nFocusing on meaningful, rather than purposeful occupations more closely aligns the profession with its espoused aspiration to enable the enhancement of quality of life.",
"title": ""
},
{
"docid": "2494840a6f833bd5b20b9b1fadcfc2f8",
"text": "Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.",
"title": ""
},
{
"docid": "3ed8fc0084bd836a3f4034a5099b374a",
"text": "A model hypothesizing differential relationships among predictor variables and individual commitment to the organization and work team was tested. Data from 485 members of sewing teams supported the existence of differential relationships between predictors and organizational and team commitment. In particular, intersender conflict and satisfaction with coworkers were more strongly related to team commitment than to organizational commitment. Resource-related conflict and satisfaction with supervision were more strongly related to organizational commitment than to team commitment. Perceived task interdependence was strongly related to both commitment foci. Contrary to prediction, the relationships between perceived task interdependence and the 2 commitment foci were not significantly different. Relationships with antecedent variables help explain how differential levels of commitment to the 2 foci may be formed. Indirect effects of exogenous variables are reported.",
"title": ""
},
{
"docid": "91cf217b2c5fa968bc4e893366ec53e1",
"text": "Importance\nPostpartum hypertension complicates approximately 2% of pregnancies and, similar to antepartum severe hypertension, can have devastating consequences including maternal death.\n\n\nObjective\nThis review aims to increase the knowledge and skills of women's health care providers in understanding, diagnosing, and managing hypertension in the postpartum period.\n\n\nResults\nHypertension complicating pregnancy, including postpartum, is defined as systolic blood pressure 140 mm Hg or greater and/or diastolic blood pressure 90 mm Hg or greater on 2 or more occasions at least 4 hours apart. Severe hypertension is defined as systolic blood pressure 160 mm Hg or greater and/or diastolic blood pressure 110 mm Hg or greater on 2 or more occasions repeated at a short interval (minutes). Workup for secondary causes of hypertension should be pursued, especially in patients with severe or resistant hypertension, hypokalemia, abnormal creatinine, or a strong family history of renal disease. Because severe hypertension is known to cause maternal stroke, women with severe hypertension sustained over 15 minutes during pregnancy or in the postpartum period should be treated with fast-acting antihypertension medication. Labetalol, hydralazine, and nifedipine are all effective for acute management, although nifedipine may work the fastest. For persistent postpartum hypertension, a long-acting antihypertensive agent should be started. Labetalol and nifedipine are also both effective, but labetalol may achieve control at a lower dose with fewer adverse effects.\n\n\nConclusions and Relevance\nProviders must be aware of the risks associated with postpartum hypertension and educate women about the symptoms of postpartum preeclampsia. Severe acute hypertension should be treated in a timely fashion to avoid morbidity and mortality. Women with persistent postpartum hypertension should be administered a long-acting antihypertensive agent.\n\n\nTarget Audience\nObstetricians and gynecologists, family physicians.\n\n\nLearning Objectives\nAfter completing this activity, the learner should be better able to assist patients and providers in identifying postpartum hypertension; provide a framework for the evaluation of new-onset postpartum hypertension; and provide instructions for the management of acute severe and persistent postpartum hypertension.",
"title": ""
},
{
"docid": "54d54094acea1900e183144d32b1910f",
"text": "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities.\n In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.",
"title": ""
},
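The privacy-score framework in the passage above combines how sensitive each disclosed item is with how visible it becomes in the network. Below is a minimal sketch of that idea, assuming a toy model in which the score is simply the sum of sensitivity times visibility over profile items; the item names, weights, and visibility rule are illustrative assumptions, not the paper's learned estimators.

```python
# Illustrative sketch of a sensitivity-times-visibility privacy score.
# Item list, sensitivity weights, and the visibility model are assumptions
# for demonstration; the paper estimates both quantities from data.

def visibility(shared_with: int, network_size: int) -> float:
    """Fraction of the network that can see a disclosed item."""
    return min(shared_with / network_size, 1.0)

def privacy_score(profile: dict, sensitivity: dict, network_size: int) -> float:
    """Sum of sensitivity * visibility over the items a user has disclosed."""
    score = 0.0
    for item, shared_with in profile.items():
        if shared_with > 0:  # item is disclosed to someone
            score += sensitivity[item] * visibility(shared_with, network_size)
    return score

# Hypothetical user: values are how many people can see each item.
sensitivity = {"birthday": 0.3, "phone": 0.8, "home_address": 1.0}
profile = {"birthday": 150, "phone": 40, "home_address": 0}
print(privacy_score(profile, sensitivity, network_size=500))  # higher = more risk
```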
{
"docid": "9556a7f345a31989bff1ee85fc31664a",
"text": "The neural basis of variation in human intelligence is not well delineated. Numerous studies relating measures of brain size such as brain weight, head circumference, CT or MRI brain volume to different intelligence test measures, with variously defined samples of subjects have yielded inconsistent findings with correlations from approximately 0 to 0.6, with most correlations approximately 0.3 or 0.4. The study of intelligence in relation to postmortem cerebral volume is not available to date. We report the results of such a study on 100 cases (58 women and 42 men) having prospectively obtained Full Scale Wechsler Adult Intelligence Scale scores. Ability correlated with cerebral volume, but the relationship depended on the realm of intelligence studied, as well as the sex and hemispheric functional lateralization of the subject. General verbal ability was positively correlated with cerebral volume and each hemisphere's volume in women and in right-handed men accounting for 36% of the variation in verbal intelligence. There was no evidence of such a relationship in non-right-handed men, indicating that at least for verbal intelligence, functional asymmetry may be a relevant factor in structure-function relationships in men, but not in women. In women, general visuospatial ability was also positively correlated with cerebral volume, but less strongly, accounting for approximately 10% of the variance. In men, there was a non-significant trend of a negative correlation between visuospatial ability and cerebral volume, suggesting that the neural substrate of visuospatial ability may differ between the sexes. Analyses of additional research subjects used as test cases provided support for our regression models. In men, visuospatial ability and cerebral volume were strongly linked via the factor of chronological age, suggesting that the well-documented decline in visuospatial intelligence with age is related, at least in right-handed men, to the decrease in cerebral volume with age. We found that cerebral volume decreased only minimally with age in women. This leaves unknown the neural substrate underlying the visuospatial decline with age in women. Body height was found to account for 1-4% of the variation in cerebral volume within each sex, leaving the basis of the well-documented sex difference in cerebral volume unaccounted for. With finer testing instruments of specific cognitive abilities and measures of their associated brain regions, it is likely that stronger structure-function relationships will be observed. Our results point to the need for responsibility in the consideration of the possible use of brain images as intelligence tests.",
"title": ""
},
{
"docid": "410a173b55faaad5a7ab01cf6e4d4b69",
"text": "BACKGROUND\nCommunication skills training (CST) based on the Japanese SHARE model of family-centered truth telling in Asian countries has been adopted in Taiwan. However, its effectiveness in Taiwan has only been preliminarily verified. This study aimed to test the effect of SHARE model-centered CST on Taiwanese healthcare providers' truth-telling preference, to determine the effect size, and to compare the effect of 1-day and 2-day CST programs on participants' truth-telling preference.\n\n\nMETHOD\nFor this one-group, pretest-posttest study, 10 CST programs were conducted from August 2010 to November 2011 under certified facilitators and with standard patients. Participants (257 healthcare personnel from northern, central, southern, and eastern Taiwan) chose the 1-day (n = 94) or 2-day (n = 163) CST program as convenient. Participants' self-reported truth-telling preference was measured before and immediately after CST programs, with CST program assessment afterward.\n\n\nRESULTS\nThe CST programs significantly improved healthcare personnel's truth-telling preference (mean pretest and posttest scores ± standard deviation (SD): 263.8 ± 27.0 vs. 281.8 ± 22.9, p < 0.001). The CST programs effected a significant, large (d = 0.91) improvement in overall truth-telling preference and significantly improved method of disclosure, emotional support, and additional information (p < 0.001). Participation in 1-day or 2-day CST programs did not significantly affect participants' truth-telling preference (p > 0.05) except for the setting subscale. Most participants were satisfied with the CST programs (93.8%) and were willing to recommend them to colleagues (98.5%).\n\n\nCONCLUSIONS\nThe SHARE model-centered CST programs significantly improved Taiwanese healthcare personnel's truth-telling preference. Future studies should objectively assess participants' truth-telling preference, for example, by cancer patients, their families, and other medical team personnel and at longer times after CST programs.",
"title": ""
},
{
"docid": "ef7e973a5c6f9e722917a283a1f0fe52",
"text": "We live in a digital society that provides a range of opportunities for virtual interaction. Consequently, emojis have become popular for clarifying online communication. This presents an exciting opportunity for psychologists, as these prolific online behaviours can be used to help reveal something unique about contemporary human behaviour.",
"title": ""
},
{
"docid": "9b0ffe566f7887c53e272d897e46100d",
"text": "3D registration or matching is a crucial step in 3D model reconstruction. Registration applications span along a variety of research fields, including computational geometry, computer vision, and geometric modeling. This variety of applications produces many diverse approaches to the problem but at the same time yields divergent notations and a lack of standardized algorithms and guidelines to classify existing methods. In this article, we review the state of the art of the 3D rigid registration topic (focused on Coarse Matching) and offer qualitative comparison between the most relevant approaches. Furthermore, we propose a pipeline to classify the existing methods and define a standard formal notation, offering a global point of view of the literature.\n Our discussion, based on the results presented in the analyzed papers, shows how, although certain aspects of the registration process still need to be tested further in real application situations, the registration pipeline as a whole has progressed steadily. As a result of this progress in all registration aspects, it is now possible to put together algorithms that are able to tackle new and challenging problems with unprecedented data sizes and meeting strict precision criteria.",
"title": ""
},
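As background for the rigid-registration pipeline surveyed above, the closed-form alignment of two already-corresponded point sets (the Kabsch/Procrustes solution via SVD) is the basic building block that fine-registration methods iterate. The sketch below is generic textbook material rather than any specific method from the survey.

```python
import numpy as np

def rigid_align(P, Q):
    """Return rotation R and translation t minimising sum ||R @ P_i + t - Q_i||^2
    for two (N, 3) arrays of corresponding points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true, atol=1e-6))      # True
```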
{
"docid": "f8c4fd23f163c0a604569b5ecf4bdefd",
"text": "The goal of interactive machine learning is to help scientists and engineers exploit more specialized data from within their deployed environment in less time, with greater accuracy and fewer costs. A basic introduction to the main components is provided here, untangling the many ideas that must be combined to produce practical interactive learning systems. This article also describes recent developments in machine learning that have significantly advanced the theoretical and practical foundations for the next generation of interactive tools.",
"title": ""
},
{
"docid": "2b8cf99331158bd7aea2958b1b64f741",
"text": "Purpose – The purpose of this paper is to understand blog users’ negative emotional norm compliance decision-making in crises (blog users’-NNDC). Design/methodology/approach – A belief– desire–intention (BDI) model to evaluate the blog users’-NNDC (the BDI-NNDC model) was developed. This model was based on three social characteristics: self-interests, expectations and emotions. An experimental study was conducted to evaluate the efficiency of the BDI-NNDC model by using data retrieved from a popular Chinese social network called “Sina Weibo” about three major crises. Findings – The BDI-NNDC model strongly predicted the Blog users’-NNDC. The predictions were as follows: a self-interested blog user posted content that was targeting his own interests; a blogger with high expectations wrote and commented emotionally negative blogs on the condition that the numbers of negative posts increased, while he ignored the norm when there was relatively less negative emotional news; and an emotional blog user obeyed the norm based on the emotional intentions of the blogosphere in most of the cases. Research limitations/implications – The BDI-NNDC model can explain the diffusion of negative emotions by blog users during crises, and this paper shows a way to bridge the social norm modelling and the research of blog users’ activity and behaviour characteristics in the context of “real life” crises. However, the criterion for differentiating blog users according to social characteristics needs to be further revised, as the generalizability of the results is limited by the number of cases selected in this study. Practical implications – The current method could be applied to predict emotional trends of blog users who have different social characteristics and it could support government agencies to build strategic responses to crises. The authors thank Mr Jon Walker and Ms Celia Zazo Seco in this work for their dedication and time. This paper is supported by the Key project of National Social Science Foundation under contract No. 13&ZD174; National Natural Science Foundation of China under contract No. 71273132, 71303111, 71471089, 71403121, 71503124 and 71503126; National Social Science Foundation under contract No. 15BTQ063; “Fundamental Research Funds for the Central Universities”, No: 30920140111006; Jiangsu “Qinlan” project (2016); Priority Academic Program Development of Jiangsu Higher Education Institutions; and Hubei Collaborative Innovation Center for Early Warning and Emergency Response Research project under contract JD20150401. The current issue and full text archive of this journal is available on Emerald Insight at: www.emeraldinsight.com/0264-0473.htm",
"title": ""
},
{
"docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7",
"text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.",
"title": ""
},
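The evaluation above is reported as a detection rate at a given number of false positives per volume. Here is a small sketch of how such summary figures are computed once detections have been matched to ground-truth nodes; the per-volume counts are hypothetical.

```python
# Hypothetical per-volume results: (true positives, false positives, ground-truth LN)
results = [(3, 1, 4), (5, 0, 5), (2, 2, 3)]   # illustrative numbers only

tp = sum(r[0] for r in results)
fp = sum(r[1] for r in results)
gt = sum(r[2] for r in results)

detection_rate = tp / gt                # fraction of annotated nodes found
fp_per_volume = fp / len(results)       # average false alarms per CT volume
print(f"detection rate = {detection_rate:.1%}, FP/volume = {fp_per_volume:.1f}")
```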
{
"docid": "4ce67aeca9e6b31c5021712f148108e2",
"text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. 94 The Journal of Advertising context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see himor herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self ’s nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations. In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. 
The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.",
"title": ""
},
{
"docid": "cc8b0cd938bc6315864925a7a057e211",
"text": "Despite the continuous growth in the number of smartphones around the globe, Short Message Service (SMS) still remains as one of the most popular, cheap and accessible ways of exchanging text messages using mobile phones. Nevertheless, the lack of security in SMS prevents its wide usage in sensitive contexts such as banking and health-related applications. Aiming to tackle this issue, this paper presents SMSCrypto, a framework for securing SMS-based communications in mobile phones. SMSCrypto encloses a tailored selection of lightweight cryptographic algorithms and protocols, providing encryption, authentication and signature services. The proposed framework is implemented both in Java (target at JVM-enabled platforms) and in C (for constrained SIM Card processors) languages, thus being suitable",
"title": ""
},
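To make the encryption/authentication/signature services concrete, here is a hedged illustration using the Python `cryptography` package: AES-GCM for confidentiality and Ed25519 for signatures. This is not SMSCrypto's actual cipher suite; key distribution, nonce handling over SMS, and SIM-card constraints are ignored.

```python
# Illustration of the kind of services such a framework provides
# (confidentiality via AES-GCM, authenticity via Ed25519 signatures).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sms = b"Transfer confirmed: ref 0042"

# Encrypt (sender and receiver are assumed to share `key` out of band).
key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, sms, None)

# Sign the ciphertext so the receiver can check who sent it.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(ciphertext)

# Receiver side: verify, then decrypt.
signing_key.public_key().verify(signature, ciphertext)   # raises on tampering
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == sms
```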
{
"docid": "fa9571673fe848d1d119e2d49f21d28d",
"text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.",
"title": ""
},
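The colorization step mentioned above maps a single-channel depth image onto the three input channels an RGB-pretrained CNN expects. Below is a minimal sketch of one common, simple variant (normalise the depth and apply a colormap); the colorization actually used in the paper may differ.

```python
import numpy as np
import matplotlib.cm as cm

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Map an (H, W) depth image to an (H, W, 3) uint8 image so that an
    RGB-pretrained CNN can consume it. Simple jet-colormap variant."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)   # normalise to [0, 1]
    rgb = cm.jet(d)[..., :3]                          # drop the alpha channel
    return (rgb * 255).astype(np.uint8)

depth = np.random.rand(240, 320) * 5.0    # synthetic depth in metres (toy data)
print(colorize_depth(depth).shape)        # (240, 320, 3)
```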
{
"docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9",
"text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.",
"title": ""
},
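Since the model above is displayed on parallel coordinates, a small illustration of that plot type with hypothetical '5Ws' density columns may help; pandas ships a ready-made helper for it. The column names and values below are made up and do not follow the paper's density definitions.

```python
# Toy illustration of plotting '5Ws' density patterns on parallel coordinates.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "who":   [0.8, 0.2, 0.5],
    "what":  [0.6, 0.1, 0.9],
    "when":  [0.3, 0.7, 0.4],
    "where": [0.5, 0.2, 0.8],
    "why":   [0.9, 0.3, 0.6],
    "pattern": ["sending", "receiving", "sending"],   # class used for colouring
})

parallel_coordinates(df, "pattern")   # one polyline per row, coloured by class
plt.ylabel("density")
plt.show()
```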
{
"docid": "a75919f4a4abcc0796ae6ba269cb91c1",
"text": "Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system’s constituent parts. In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.",
"title": ""
}
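A very small sketch of the discrete-latent ingredient described above: per-edge logits are relaxed with the Gumbel-softmax so that sampling an interaction graph stays differentiable. Only the sampling step is shown, not the full encoder/decoder, and the shapes plus the convention that edge type 0 means "no interaction" are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

num_nodes, num_edge_types, hidden = 5, 2, 16
num_edges = num_nodes * (num_nodes - 1)          # directed edges, no self-loops

# Hypothetical encoder output: one logit vector per edge.
edge_logits = torch.randn(num_edges, num_edge_types)

# Differentiable (near) one-hot sample of an interaction graph.
edge_types = F.gumbel_softmax(edge_logits, tau=0.5, hard=True)  # (num_edges, K)

# A decoder would weight per-edge messages by these samples; here type 0 = "no edge".
messages = torch.randn(num_edges, hidden)
weighted = edge_types[:, 1:].sum(dim=-1, keepdim=True) * messages
print(weighted.shape)                            # torch.Size([20, 16])
```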
] |
scidocsrr
|
1bf21d6db5865db497850fe615b3d462
|
Deadline Based Resource Provisioningand Scheduling Algorithm for Scientific Workflows on Clouds
|
[
{
"docid": "4936a07e1b6a42fde7a8fdf1b420776c",
"text": "One of many advantages of the cloud is the elasticity, the ability to dynamically acquire or release computing resources in response to demand. However, this elasticity is only meaningful to the cloud users when the acquired Virtual Machines (VMs) can be provisioned in time and be ready to use within the user expectation. The long unexpected VM startup time could result in resource under-provisioning, which will inevitably hurt the application performance. A better understanding of the VM startup time is therefore needed to help cloud users to plan ahead and make in-time resource provisioning decisions. In this paper, we study the startup time of cloud VMs across three real-world cloud providers -- Amazon EC2, Windows Azure and Rackspace. We analyze the relationship between the VM startup time and different factors, such as time of the day, OS image size, instance type, data center location and the number of instances acquired at the same time. We also study the VM startup time of spot instances in EC2, which show a longer waiting time and greater variance compared to on-demand instances.",
"title": ""
},
{
"docid": "27a4b74d3c47fc25a8564cd824aa9e66",
"text": "Grid computing is increasingly considered as a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are always regarded as workflows. The problem of scheduling workflows in terms of certain quality of service (QoS) requirements is challenging and it significantly influences the performance of grids. By now, there have been some algorithms for grid workflow scheduling, but most of them can only tackle the problems with a single QoS parameter or with small-scale workflows. In this frame, this paper aims at proposing an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a certain application. The objective of this algorithm is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments are done in ten workflow applications with at most 120 tasks, and the results demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
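The core of the approach above is an ant's probabilistic choice of a task-to-resource assignment driven by pheromone and heuristic values, followed by pheromone evaporation and reinforcement. Below is a minimal sketch of those two steps with made-up values; the seven problem-specific heuristics, the adaptive heuristic selection, and the QoS constraints are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def choose_resource(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Standard ACO selection rule: p_j is proportional to tau_j^alpha * eta_j^beta."""
    weights = (pheromone ** alpha) * (heuristic ** beta)
    probs = weights / weights.sum()
    return rng.choice(len(pheromone), p=probs)

# Hypothetical values for one workflow task and four candidate resources.
tau = np.array([1.0, 0.5, 2.0, 1.5])     # pheromone deposited by earlier ants
eta = np.array([0.8, 0.9, 0.4, 0.6])     # heuristic desirability (e.g. 1/cost)

resource = choose_resource(tau, eta)
print("assign task to resource", resource)

# After an ant builds a full schedule, pheromone is evaporated and reinforced:
rho, quality = 0.1, 0.7                   # evaporation rate, solution quality
tau = (1 - rho) * tau
tau[resource] += quality
```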
{
"docid": "795e9da03d2b2d6e66cf887977fb24e9",
"text": "Researchers working on the planning, scheduling, and execution of scientific workflows need access to a wide variety of scientific workflows to evaluate the performance of their implementations. This paper provides a characterization of workflows from six diverse scientific applications, including astronomy, bioinformatics, earthquake science, and gravitational-wave physics. The characterization is based on novel workflow profiling tools that provide detailed information about the various computational tasks that are present in the workflow. This information includes I/O, memory and computational characteristics. Although the workflows are diverse, there is evidence that each workflow has a job type that consumes the most amount of runtime. The study also uncovered inefficiency in a workflow component implementation, where the component was re-reading the same data multiple times. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "a993a7a5aa45fb50e19326ec4c98472d",
"text": "Innumerable terror and suspicious messages are sent through Instant Messengers (IM) and Social Networking Sites (SNS) which are untraced, leading to hindrance for network communications and cyber security. We propose a Framework that discover and predict such messages that are sent using IM or SNS like Facebook, Twitter, LinkedIn, and others. Further, these instant messages are put under surveillance that identifies the type of suspected cyber threat activity by culprit along with their personnel details. Framework is developed using Ontology based Information Extraction technique (OBIE), Association rule mining (ARM) a data mining technique with set of pre-defined Knowledge-based rules (logical), for decision making process that are learned from domain experts and past learning experiences of suspicious dataset like GTD (Global Terrorist Database). The experimental results obtained will aid to take prompt decision for eradicating cyber crimes.",
"title": ""
},
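Association rule mining, used in the framework above, scores candidate rules by support and confidence over a set of transactions. Here is a tiny hand-rolled sketch of those two measures on made-up message-keyword "transactions"; the keywords and numbers are purely illustrative.

```python
# Support / confidence for a single candidate rule {A} -> {B} over
# keyword "transactions" extracted from messages. Data is illustrative only.
transactions = [
    {"attack", "location", "date"},
    {"attack", "weapon"},
    {"meeting", "location"},
    {"attack", "location"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

rule_lhs, rule_rhs = {"attack"}, {"location"}
print("support    =", support(rule_lhs | rule_rhs))    # 0.5
print("confidence =", confidence(rule_lhs, rule_rhs))  # ~0.67
```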
{
"docid": "70830fc4130b4c3281f596e8d7d2529e",
"text": "In 1948 Shannon developed fundamental limits on the efficiency of communication over noisy channels. The coding theorem asserts that there are block codes with code rates arbitrarily close to channel capacity and probabilities of error arbitrarily close to zero. Fifty years later, codes for the Gaussian channel have been discovered that come close to these fundamental limits. There is now a substantial algebraic theory of error-correcting codes with as many connections to mathematics as to engineering practice, and the last 20 years have seen the construction of algebraic-geometry codes that can be encoded and decoded in polynomial time, and that beat the Gilbert–Varshamov bound. Given the size of coding theory as a subject, this review is of necessity a personal perspective, and the focus is reliable communication, and not source coding or cryptography. The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.",
"title": ""
},
{
"docid": "31fc886990140919aabce17aa7774910",
"text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.",
"title": ""
},
{
"docid": "a67574d560911af698b7dddac4e8dd8a",
"text": "Ciliates are an ancient and diverse group of microbial eukaryotes that have emerged as powerful models for RNA-mediated epigenetic inheritance. They possess extensive sets of both tiny and long noncoding RNAs that, together with a suite of proteins that includes transposases, orchestrate a broad cascade of genome rearrangements during somatic nuclear development. This Review emphasizes three important themes: the remarkable role of RNA in shaping genome structure, recent discoveries that unify many deeply diverged ciliate genetic systems, and a surprising evolutionary \"sign change\" in the role of small RNAs between major species groups.",
"title": ""
},
{
"docid": "41481b2f081831d28ead1b685465d535",
"text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential which is derived from its high content of bioflavonoids such as apigenin, quercitin, luteoline. Furthermore, indole compounds, amely choline, which known for antioxidants and also possess chelating property for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin. The only difference is that the central element in chlorophyll is magnesium and in hemoglobin it is iron. In wheat grass makes it more useful in various clinical conditions involving hemoglobin deficiency and other chronic disorders ultimately considered as green blood.",
"title": ""
},
{
"docid": "4053bbaf8f9113bef2eb3b15e34a209a",
"text": "With the recent availability of commodity Virtual Reality (VR) products, immersive video content is receiving a significant interest. However, producing high-quality VR content often requires upgrading the entire production pipeline, which is costly and time-consuming. In this work, we propose using video feeds from regular broadcasting cameras to generate immersive content. We utilize the motion of the main camera to generate a wide-angle panorama. Using various techniques, we remove the parallax and align all video feeds. We then overlay parts from each video feed on the main panorama using Poisson blending. We examined our technique on various sports including basketball, ice hockey and volleyball. Subjective studies show that most participants rated their immersive experience when viewing our generated content between Good to Excellent. In addition, most participants rated their sense of presence to be similar to ground-truth content captured using a GoPro Omni 360 camera rig.",
"title": ""
},
{
"docid": "5bd7df3bfcb5b99f8bcb4a9900af980e",
"text": "A learning model predictive controller for iterative tasks is presented. The controller is reference-free and is able to improve its performance by learning from previous iterations. A safe set and a terminal cost function are used in order to guarantee recursive feasibility and nondecreasing performance at each iteration. This paper presents the control design approach, and shows how to recursively construct terminal set and terminal cost from state and input trajectories of previous iterations. Simulation results show the effectiveness of the proposed control logic.",
"title": ""
},
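The recursive construction described above stores states visited in earlier iterations in a sampled safe set, together with the realised cost-to-go, which then serves as terminal set and terminal cost for later iterations. Below is a toy sketch of just that bookkeeping step; the MPC optimisation itself, the system model, and the scalar stage cost are illustrative placeholders.

```python
# Toy bookkeeping for a learning MPC: after each iteration, add the closed-loop
# trajectory to the sampled safe set and record the realised cost-to-go.
# States, inputs, and the stage cost are illustrative placeholders.

def stage_cost(x, u):
    return x * x + u * u

safe_set = []      # list of (state, cost_to_go) pairs accumulated over iterations

def add_iteration(states, inputs):
    costs = [stage_cost(x, u) for x, u in zip(states, inputs)]
    # cost-to-go at step k = sum of stage costs from k to the end of the iteration
    cost_to_go, tail = [], 0.0
    for c in reversed(costs):
        tail += c
        cost_to_go.append(tail)
    cost_to_go.reverse()
    safe_set.extend(zip(states, cost_to_go))

add_iteration(states=[4.0, 2.0, 1.0, 0.0], inputs=[-2.0, -1.0, -1.0, 0.0])

# A later iteration's terminal cost at a stored state is the best recorded value.
terminal_cost = {x: min(c for s, c in safe_set if s == x) for x, _ in safe_set}
print(terminal_cost)
```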
{
"docid": "6e690c5aa54b28ba23d9ac63db4cc73a",
"text": "The Topic Detection and Tracking (TDT) evaluation program has included a \"cluster detection\" task since its inception in 1996. Systems were required to process a stream of broadcast news stories and partition them into non-overlapping clusters. A system's effectiveness was measured by comparing the generated clusters to \"truth\" clusters created by human annotators. Starting in 2003, TDT is moving to a more realistic model that permits overlapping clusters (stories may be on more than one topic) and encourages the creation of a hierarchy to structure the relationships between clusters (topics). We explore a range of possible evaluation models for this modified TDT clustering task to understand the best approach for mapping between the human-generated \"truth\" clusters and a much richer hierarchical structure. We demonstrate that some obvious evaluation techniques fail for degenerate cases. For a few others we attempt to develop an intuitive sense of what the evaluation numbers mean. We settle on some approaches that incorporate a strong balance between cluster errors (misses and false alarms) and the distance it takes to travel between stories within the hierarchy.",
"title": ""
},
{
"docid": "87748bcc07ab498218233645bdd4dd0c",
"text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. A passive RF sensing test- bed is set up with two channels where the first one is the reference channel providing the un- altered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by proposed novel method of convolutional neural network (CNN). Extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.",
"title": ""
},
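A minimal sketch of the kind of 1D convolutional classifier the passage above describes, operating on windows of the filtered surveillance signal. The architecture, window length, and channel counts are assumptions for illustration, not the authors' network.

```python
# Minimal 1D-CNN classifier for windowed RF time series (forward/backward motion).
import torch
import torch.nn as nn

class MotionCNN(nn.Module):
    def __init__(self, n_classes=2, window=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (window // 16), n_classes)

    def forward(self, x):                    # x: (batch, 1, window)
        h = self.features(x)
        return self.classifier(h.flatten(1))

x = torch.randn(8, 1, 1024)                  # 8 filtered surveillance windows (toy)
logits = MotionCNN()(x)
print(logits.shape)                          # torch.Size([8, 2])
```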
{
"docid": "79685eeb67edbb3fbb6e6340fac420c3",
"text": "Fatma Özcan IBM Almaden Research Center San Jose, CA [email protected] Nesime Tatbul Intel Labs and MIT Cambridge, MA [email protected] Daniel J. Abadi Yale University New Haven, CT [email protected] Marcel Kornacker Cloudera San Francisco, CA [email protected] C Mohan IBM Almaden Research Center San Jose, CA [email protected] Karthik Ramasamy Twitter, Inc. San Francisco, CA [email protected] Janet Wiener Facebook, Inc. Menlo Park, CA [email protected]",
"title": ""
},
{
"docid": "4e23abcd1746d23c54e36c51e0a59208",
"text": "Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations to informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal selfsimilarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, HOF, etc.), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. Experiments on several benchmark datasets in video based action classification demonstrate that our approach performs competitively to the state of the art.",
"title": ""
},
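One way to read the saliency idea above is: learn a dictionary on block descriptors from the video and call a block salient when its sparse code reconstructs it poorly (low self-similarity). The sketch below uses scikit-learn and random vectors in place of real HOG/HOF descriptors, so it illustrates the mechanism rather than the paper's pipeline.

```python
# Sketch: score "self-similarity" of spatio-temporal blocks by how well each
# block is reconstructed from a sparse code over a dictionary learned on the
# same video. Feature extraction is replaced by random vectors (toy data).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
blocks = rng.normal(size=(200, 64))           # 200 block descriptors, 64-D

dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes = dico.fit_transform(blocks)            # sparse codes, shape (200, 32)
recon = codes @ dico.components_

# Blocks with high reconstruction error are the least self-similar -> most salient.
saliency = np.linalg.norm(blocks - recon, axis=1)
top_salient = np.argsort(saliency)[-20:]      # keep a small salient subset
print(top_salient.shape)
```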
{
"docid": "51b8fe57500d1d74834d1f9faa315790",
"text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.",
"title": ""
},
{
"docid": "0ec8872c972335c11a63380fe1f1c51f",
"text": "MOTIVATION\nMany complex disease syndromes such as asthma consist of a large number of highly related, rather than independent, clinical phenotypes, raising a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. Although a causal genetic variation may influence a group of highly correlated traits jointly, most of the previous association analyses considered each phenotype separately, or combined results from a set of single-phenotype analyses.\n\n\nRESULTS\nWe propose a new statistical framework called graph-guided fused lasso to address this issue in a principled way. Our approach represents the dependency structure among the quantitative traits explicitly as a network, and leverages this trait network to encode structured regularizations in a multivariate regression model over the genotypes and traits, so that the genetic markers that jointly influence subgroups of highly correlated traits can be detected with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently, our approach analyzes all of the traits jointly in a single statistical method to discover the genetic markers that perturb a subset of correlated traits jointly rather than a single trait. Using simulated datasets based on the HapMap consortium data and an asthma dataset, we compare the performance of our method with the single-marker analysis, and other sparse regression methods that do not use any structural information in the traits. Our results show that there is a significant advantage in detecting the true causal single nucleotide polymorphisms when we incorporate the correlation pattern in traits using our proposed methods.\n\n\nAVAILABILITY\nSoftware for GFlasso is available at http://www.sailing.cs.cmu.edu/gflasso.html.",
"title": ""
},
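For readers who want the shape of the penalty described above: the graph-guided fused lasso augments a multivariate lasso regression with a fusion term defined over the edges of the trait network. The objective below is a sketch from memory of the commonly presented form; exact weights and notation follow the original GFlasso papers and may differ in detail.

```latex
\hat{B} = \arg\min_{B}\; \lVert Y - XB \rVert_F^2
  + \lambda \sum_{k=1}^{K} \sum_{j=1}^{J} \lvert \beta_{jk} \rvert
  + \gamma \sum_{(m,l) \in E} w(r_{ml}) \sum_{j=1}^{J}
      \bigl\lvert \beta_{jm} - \operatorname{sign}(r_{ml}) \, \beta_{jl} \bigr\rvert
```

Here X is the genotype matrix, Y the matrix of K correlated traits, E the edge set of the trait network, r_{ml} the correlation between traits m and l, and w(.) a weight that grows with correlation strength; the fusion term encourages a marker's effects on highly correlated traits to be similar in magnitude and consistent in sign.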
{
"docid": "dd2322ad8956e3a8cc490e6b6e6bc2c8",
"text": "Wireless networking has witnessed an explosion of interest from consumers in recent years for its applications in mobile and personal communications. As wireless networks become an integral component of the modern communication infrastructure, energy efficiency will be an important design consideration due to the limited battery life of mobile terminals. Power conservation techniques are commonly used in the hardware design of such systems. Since the network interface is a significant consumer of power, considerable research has been devoted to low-power design of the entire network protocol stack of wireless networks in an effort to enhance energy efficiency. This paper presents a comprehensive summary of recent work addressing energy efficient and low-power design within all layers of the wireless network protocol stack.",
"title": ""
},
{
"docid": "6c6e4e776a3860d1df1ccd7af7f587d5",
"text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.",
"title": ""
},
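A mean-feature-matching loss of the kind described above can be sketched in a few lines: the IPM is estimated as the distance between mean feature embeddings of a real and a generated batch. The feature network below is a stand-in, and covariance matching is omitted.

```python
# Sketch of a mean-feature-matching critic loss: the IPM is estimated as the
# distance between mean feature embeddings of real and generated batches.
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 16))  # stand-in critic features

def mean_matching_ipm(x_real, x_fake):
    mu_real = phi(x_real).mean(dim=0)
    mu_fake = phi(x_fake).mean(dim=0)
    return torch.norm(mu_real - mu_fake, p=2)

x_real = torch.randn(128, 2) + 2.0        # toy "data" distribution
x_fake = torch.randn(128, 2)              # toy generator output
loss = mean_matching_ipm(x_real, x_fake)  # critic maximises this, generator minimises it
print(loss.item())
```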
{
"docid": "011a9ac960aecc4a91968198ac6ded97",
"text": "INTRODUCTION\nPsychological empowerment is really important and has remarkable effect on different organizational variables such as job satisfaction, organizational commitment, productivity, etc. So the aim of this study was to investigate the relationship between psychological empowerment and productivity of Librarians in Isfahan Medical University.\n\n\nMETHODS\nThis was correlational research. Data were collected through two questionnaires. Psychological empowerment questionnaire and the manpower productivity questionnaire of Gold. Smith Hersey which their content validity was confirmed by experts and their reliability was obtained by using Cronbach's Alpha coefficient, 0.89 and 0.9 respectively. Due to limited statistical population, did not used sampling and review was taken via census. So 76 number of librarians were evaluated. Information were reported on both descriptive and inferential statistics (correlation coefficient tests Pearson, Spearman, T-test, ANOVA), and analyzed by using the SPSS19 software.\n\n\nFINDINGS\nIn our study, the trust between partners and efficacy with productivity had the highest correlation. Also there was a direct relationship between psychological empowerment and the productivity of labor (r =0.204). In other words, with rising of mean score of psychological empowerment, the mean score of efficiency increase too.\n\n\nCONCLUSIONS\nThe results showed that if development programs of librarian's psychological empowerment increase in order to their productivity, librarians carry out their duties with better sense. Also with using the capabilities of librarians, the development of creativity with happen and organizational productivity will increase.",
"title": ""
},
{
"docid": "a3dc04fe9478f881608289ae13e979cb",
"text": "Background: The white matter of the cerebellum has a population of GFAP+ cells with neurogenic potential restricted to early postnatal development (P2-P12), these astrocytes are the precursors of stellate cells and basket cells in the molecular layer. On the other hand, GABA is known to serve as a feedback regulator of neural production and migration through tonic activation of GABA-A receptors. Aim: To investigate the functional expression of GABA-A receptors in the cerebellar white matter astrocytes at P7-9 and P18-20. Methods: Immunofluorescence for α1, α2, β1 subunits & GAD67 enzyme in GFAP-EGFP mice (n=10 P8; n= 8 P18). Calcium Imaging: horizontal acute slices were incubated with Fluo4 AM in order to measure the effect of GABA-A or GATs antagonist bicuculline or nipecotic acid on spontaneous calcium oscillations, as well as on GABA application evoked responses. Results: Our results showed that α1 (3.18%), α2 (10.4%) and β1 (not detected) subunits were not predominantly expressed in astrocytes of white matter at P8. However, GAD67 co-localized with 54% of GFAP+ cells, suggesting that a fraction of astrocytes could synthesize GABA. Moreover, calcium imaging experiments showed that white matter cells responded to GABA. This response was antagonized by bicuculline suggesting functional expression of GABA-A receptors. Conclusions: Together these results suggest that GABA is synthesized by half astrocytes in white matter at P8 and that GABA could be released locally to activate GABA-A receptors that are also expressed in cells of the white matter of the cerebellum, during early postnatal development. (D) Acknowledgements: We thank the technical support of E. N. Hernández-Ríos, A, Castilla, L. Casanova, A. E. Espino & M. García-Servín. F.E. Labrada-Moncada is a CONACyT (640190) scholarship holder. This work was supported by PAPIIT-UNAM grants (IN201913 e IN201915) to A. Martínez-Torres and D. Reyes-Haro. 48. PROLACTIN PROTECTS AGAINST JOINT INFLAMMATION AND BONE LOSS IN ARTHRITIS Ledesma-Colunga MG, Adán N, Ortiz G, Solis-Gutierrez M, López-Barrera F, Martínez de la Escalera G, y Clapp C. Departamento de Neurobiología Celular y Molecular, Instituto de Neurobiología, UNAM Campus Juriquilla, Querétaro, México. Prolactin (PRL) reduces joint inflammation, pannus formation, and bone destruction in rats with polyarticular adjuvant-induced arthritis (AIA). Here, we investigate the mechanism of PRL protection against bone loss in AIA and in monoarticular AIA (MAIA). Joint inflammation and osteoclastogenesis were evaluated in rats with AIA treated with PRL (via osmotic minipumps) and in mice with MAIA that were null (Prlr-/-) or not (Prlr+/+) for the PRL receptor. To help define target cells, synovial fibroblasts isolated from healthy Prlr+/+ mice were treated or not with T-cell-derived cytokines (Cyt: TNFa, IL-1b, and IFNg) with or without PRL. In AIA, PRL treatment reduced joint swelling, lowered joint histochemical accumulation of the osteoclast marker, tartrateresistant acid phosphatase (TRAP), and decreased joint mRNA levels of osteoclasts-associated genes (Trap, Cathepsin K, Mmp9, Rank) and of cytokines with osteoclastogenic activity (Tnfa, Il-1b, Il-6, Rankl). Prlr-/mice with MAIA showed enhanced joint swelling, increased TRAP activity, and elevated expression of Trap, Rankl, and Rank. The expression of the long PRL receptor form increased in arthritic joints, and in joints and cultured synovial fibroblasts treated with Cyt. 
PRL induced the phosphorylation/activation of Jornadas Académicas, 2016 Martes 27 de Septiembre, Cartel 35 al 67 signal transducer and activator of transcription-3 (STAT3) and inhibited the Cyt-induced expression of Il-1b, Il-6, and Rankl in synovial cultures. The STAT3 inhibitor S31-201 blocked inhibition of Rankl by PRL. PRL protects against bone loss in inflammatory arthritis by inhibiting cytokine-induced activation of RANKL in joints and synoviocytes via its canonical STAT3 signaling pathway. Hyperprolactinemia-inducing drugs are promising therapeutics for preventing bone loss in rheumatoid arthritis. We thank Gabriel Nava, Daniel Mondragón, Antonio Prado, Martín García, and Alejandra Castilla for technical assistance. Research Support: UNAM-PAPIIT Grant IN201315. M.G.L.C is a doctoral student from Programa de Doctorado en Ciencias Biomédicas, Universidad Nacional Autónoma de México (UNAM) receiving fellowship 245828 from CONACYT. (D) 49. ADC MEASUREMENT IN LATERALY MEDULLARY INFARCTION (WALLENBERG SYNDROME) León-Castro LR1, Fourzán-Martínez M1, Rivas-Sánchez LA1, García-Zamudio E1, Nigoche J2, Ortíz-Retana J1, Barragán-Campos HM1. 1.Magnetic Resonance Unit, Institute of Neurobiology, Campus Juriquilla, National Autonomous University of México. Querétaro, Qro., 2.Department of Radiology. Naval Highly Specialized General Hospital, México City, México. BACKGROUND: The stroke of the vertebrobasilar system (VBS) represents 20% of ischemic vascular events. When the territory of the posterior inferior cerebellar artery (PICA) is affected, lateral medullary infarction (LMI) occurs, typically called Wallenberg syndrome; it accounts for 2-7% of strokes of VBS. Given the diversity of symptoms that causes, it is a difficult disease to diagnose. The reference exam to evaluate cerebral blood flow is digital subtraction angiography (DSA); however, it is an invasive method. Magnetic resonance imaging (MRI) is a noninvasive study and the sequence of diffusion (DWI) can detect early ischemic changes, after 20 minutes of ischemia onset, it also allows to locate and determine the extent of the affected parenchyma. Measurement of the apparent diffusion coefficient (ADC) is a semiquantitative parameter that confirms or rule out the presence of infarction, although the diffusion sequence (DWI) has restriction signal. OBJECTIVE: To measure the ADC values in patients with LMI and compare their values with the contralateral healthy tissue. MATERIALS AND METHODS: The database of Unit Magnetic Resonance Unit of studies carried out from January 2010 to July 2016 was revised to include cases diagnosed by MRI with LMI. The images were acquired in two resonators of 3.0 T (Phillips Achieva TX and General Electric Discovery 750 MR). DWI sequence with b value of 1000 was used to look after LMI, then ADC value measurement of the infarcted area and the contralateral area was performed in the same patient. Two groups were identified: a) infarction and b) healthy tissue. Eleven patients, 5 female (45.5%) and 6 males (54.5%), were included. A descriptive statistic was performed and infarction and healthy tissue were analyzed with U-Mann-Whitney test. RESULTS: In the restriction areas observed in DWI, ADC values were measured; the infarction tissue has a median of 0.54X10-3 mm2/s, interquartile range 0.41-1.0X10-3 mm2/seg; the healthy tissue has a median of 0.24X103 mm2/seg, interquartile range 0.19-0.56X10-3 mm2/seg. The U-Mann-Whitney test has a statistical significance of p<0.05. 
CONCLUSION: ADC measurement allows to confirm or rule out LMI in patients with the clinical suspicion of Wallenberg syndrome. It also serves to eliminate other diseases that showed restriction in DWI; for example, neoplasm, pontine myelinolysis, acute disseminated encephalomyelitis, multiple sclerosis and diffuse axonal injury. (L) Jornadas Académicas, 2016 Martes 27 de Septiembre, Cartel 35 al 67 50. ENDOVASCULAR CAROTID STENTING IN A PATIENT WITH PREVIOUS STROKE, ISCHEMIC HEART DISEASE, AND SEVERE AORTIC VALVE STENOSIS Lona-Pérez OA1, Balderrama-Bañares J2, Martínez-Reséndiz JA3, Yáñez-LedesmaM4, Jiménez-Zarazúa O5, Vargas-Jiménez MA6, Galeana-Juárez C6, Asensio-Lafuente E7, Barinagarrementeria-Aldatz F8, Barragán Campos H.M9,10. 1.2nd year student of the Faculty of Medicine at the University Autonomous of Querétaro, Qro., 2. Endovascular Neurological Therapy Department, Neurology and Neurosurgery National Institute “Dr. Manuel Velasco Suarez”, México City, México., 3. Department of Anesthesiology, Querétaro General Hospital, SESEQ, Querétaro, Qro., 4. Department of Anesthesiology, León Angeles Hospital, Gto., 5. Internal Medicine Department, León General Hospital, Gto., 6. Coordination of Clinical Rotation, Faculty of Medicine at the University Autonomous of Querétaro, Qro., 7. Cardiology-Electrophysiology , Hospital H+, Querétaro, Qro., 8. Neurologist, Permanent Member of National Academy of Medicine of Mexico, Hospital H+, Querétaro, Qro., 9. Magnetic Resonance Unit, Institute of Neurobiology, Campus Juriquilla, National Autonomous University of México, Querétaro, Qro., 10. Radiology Department. Querétaro General Hospital, SESEQ, Querétaro, Qro OBJECTIVE: We present a case report of a 74-year-old feminine patient who suffered from right superior gyrus stroke, ischemic heart disease, and severe valve aortic stenosis, in whom it was needed to identify which problem had to be treated first. Family antecedent of breast, pancreas, and prostate cancer in first order relatives; smoking 5 packages/year during >20 years, occasional alcoholism, right inguinal hernioplasty, hypertension and dyslipidemia of 3 years of evolution, under treatment. She presented angor pectoris at rest, lasted 3 minute long and has spontaneous recovery, 7 days later she had brain stroke at superior right frontal gyrus, developed hemiparesis with left crural predominance. MATERIALS & METHODS: Anamnesis, complete physical examination, laboratory, as well as heart and brain imaging were performed. Severe aortic valvular stenosis diagnosed by echocardiogram with 0.6 cm2 valvular area, average gradient of 38 mmHg and maximum of 66 mmHg; light mitral stenosis with valvular area of 1.8 cm2, without left atrium dilatation, maximum gradient of 8 mmHg; PSAP 30 mmHg, US Carotid Doppler showed atherosclerotic plaques in the proximal posterior wall of the bulb right internal carotid artery (RICA) that determinates a maximum stenosis of 70%. Aggressive management with antihypertensive (Met",
"title": ""
},
{
"docid": "948295ca3a97f7449548e58e02dbdd62",
"text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.",
"title": ""
},
{
"docid": "2ac2e639e9999f7c6e5be97632d7e126",
"text": "BACKGROUND\nThe relationship of health risk behavior and disease in adulthood to the breadth of exposure to childhood emotional, physical, or sexual abuse, and household dysfunction during childhood has not previously been described.\n\n\nMETHODS\nA questionnaire about adverse childhood experiences was mailed to 13,494 adults who had completed a standardized medical evaluation at a large HMO; 9,508 (70.5%) responded. Seven categories of adverse childhood experiences were studied: psychological, physical, or sexual abuse; violence against mother; or living with household members who were substance abusers, mentally ill or suicidal, or ever imprisoned. The number of categories of these adverse childhood experiences was then compared to measures of adult risk behavior, health status, and disease. Logistic regression was used to adjust for effects of demographic factors on the association between the cumulative number of categories of childhood exposures (range: 0-7) and risk factors for the leading causes of death in adult life.\n\n\nRESULTS\nMore than half of respondents reported at least one, and one-fourth reported > or = 2 categories of childhood exposures. We found a graded relationship between the number of categories of childhood exposure and each of the adult health risk behaviors and diseases that were studied (P < .001). Persons who had experienced four or more categories of childhood exposure, compared to those who had experienced none, had 4- to 12-fold increased health risks for alcoholism, drug abuse, depression, and suicide attempt; a 2- to 4-fold increase in smoking, poor self-rated health, > or = 50 sexual intercourse partners, and sexually transmitted disease; and 1.4- to 1.6-fold increase in physical inactivity and severe obesity. The number of categories of adverse childhood exposures showed a graded relationship to the presence of adult diseases including ischemic heart disease, cancer, chronic lung disease, skeletal fractures, and liver disease. The seven categories of adverse childhood experiences were strongly interrelated and persons with multiple categories of childhood exposure were likely to have multiple health risk factors later in life.\n\n\nCONCLUSIONS\nWe found a strong graded relationship between the breadth of exposure to abuse or household dysfunction during childhood and multiple risk factors for several of the leading causes of death in adults.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] |
scidocsrr
|
1cdba41323a51af6418bf6930a0ea71b
|
Integrated magnetic design of small planar transformers for LLC resonant converters
|
[
{
"docid": "bd2707625e9077837f26b53d9f9c4382",
"text": "Planar transformer parasitics are difficult to model due to the complex winding geometry and their nonlinear and multivariate nature. This paper provides parametric models for planar transformer parasitics based on finite element simulations of a variety of winding design parameters using the Design of Experiments (DoE) methodology. A rotatable Central Composite Design (CCD) is employed based on a full 24 factorial design to obtain equations for the leakage inductance, inter and intra-winding capacitances, resistance, output current, and output voltage. Using only 25 runs, the final results can be used to replace time-intensive finite element simulations for a wide range of planar transformer winding design options. Validation simulations were performed to confirm the accuracy of the model. Results are presented using a planar E18/4/10 core set and can be extended to a variety of core shapes and sizes.",
"title": ""
}
] |
[
{
"docid": "247a6f200670e43980ac7762e52c86eb",
"text": "We propose a novel mechanism for Turing pattern formation that provides a possible explanation for the regular spacing of synaptic puncta along the ventral cord of C. elegans during development. The model consists of two interacting chemical species, where one is passively diffusing and the other is actively trafficked by molecular motors. We identify the former as the kinase CaMKII and the latter as the glutamate receptor GLR-1. We focus on a one-dimensional model in which the motor-driven chemical switches between forward and backward moving states with identical speeds. We use linear stability analysis to derive conditions on the associated nonlinear interaction functions for which a Turing instability can occur. We find that the dimensionless quantity γ = αd/v has to be sufficiently small for patterns to emerge, where α is the switching rate between motor states, v is the motor speed, and d is the passive diffusion coefficient. One consequence is that patterns emerge outside the parameter regime of fast switching where the model effectively reduces to a twocomponent reaction-diffusion system. Numerical simulations of the model using experimentally based parameter values generates patterns with a wavelength consistent with the synaptic spacing found in C. elegans. Finally, in the case of biased transport, we show that the system supports spatially periodic patterns in the presence of boundary forcing, analogous to flow distributed structures in reaction-diffusion-advection systems. Such forcing could represent the insertion of new motor-bound GLR-1 from the soma of ventral cord neurons.",
"title": ""
},
{
"docid": "171f84938f8788e293d763fccc8b3c27",
"text": "Google ads, black names and white names, racial discrimination, and click advertising",
"title": ""
},
{
"docid": "6a8ac2a2786371dcb043d92fa522b726",
"text": "We propose a modular reinforcement learning algorithm which decomposes a Markov decision process into independent modules. Each module is trained using Sarsa(λ). We introduce three algorithms for forming global policy from modules policies, and demonstrate our results using a 2D grid world.",
"title": ""
},
{
"docid": "fc77be5db198932d6cb34e334a4cdb4b",
"text": "This thesis investigates how data mining algorithms can be used to predict Bodily Injury Liability Insurance claim payments based on the characteristics of the insured customer’s vehicle. The algorithms are tested on real data provided by the organizer of the competition. The data present a number of challenges such as high dimensionality, heterogeneity and missing variables. The problem is addressed using a combination of regression, dimensionality reduction, and classification techniques. Questa tesi si propone di esaminare come alcune tecniche di data mining possano essere usate per predirre l’ammontare dei danni che un’ assicurazione dovrebbe risarcire alle persone lesionate a partire dalle caratteristiche del veicolo del cliente assicurato. I dati utilizzati sono reali e la loro analisi presenta diversi ostacoli dati dalle loro grandi dimensioni, dalla loro eterogeneitá e da alcune variabili mancanti. ll problema é stato affrontato utilizzando una combinazione di tecniche di regressione, di riduzione di dimensionalitá e di classificazione.",
"title": ""
},
{
"docid": "605c6b431b336ebe2ed07e7fcf529121",
"text": "Standard approaches to probabilistic reasoning require that one possesses an explicit model of the distribution in question. But, the empirical learning of models of probability distributions from partial observations is a problem for which efficient algorithms are generally not known. In this work we consider the use of bounded-degree fragments of the “sum-of-squares” logic as a probability logic. Prior work has shown that we can decide refutability for such fragments in polynomial-time. We propose to use such fragments to decide queries about whether a given probability distribution satisfies a given system of constraints and bounds on expected values. We show that in answering such queries, such constraints and bounds can be implicitly learned from partial observations in polynomial-time as well. It is known that this logic is capable of deriving many bounds that are useful in probabilistic analysis. We show here that it furthermore captures key polynomial-time fragments of resolution. Thus, these fragments are also quite expressive.",
"title": ""
},
{
"docid": "6e1e3209c127eca9c2e3de76d745d215",
"text": "Recently, in 2014, He and Wang proposed a robust and efficient multi-server authentication scheme using biometrics-based smart card and elliptic curve cryptography (ECC). In this paper, we first analyze He-Wang's scheme and show that their scheme is vulnerable to a known session-specific temporary information attack and impersonation attack. In addition, we show that their scheme does not provide strong user's anonymity. Furthermore, He-Wang's scheme cannot provide the user revocation facility when the smart card is lost/stolen or user's authentication parameter is revealed. Apart from these, He-Wang's scheme has some design flaws, such as wrong password login and its consequences, and wrong password update during password change phase. We then propose a new secure multi-server authentication protocol using biometric-based smart card and ECC with more security functionalities. Using the Burrows-Abadi-Needham logic, we show that our scheme provides secure authentication. In addition, we simulate our scheme for the formal security verification using the widely accepted and used automated validation of Internet security protocols and applications tool, and show that our scheme is secure against passive and active attacks. Our scheme provides high security along with low communication cost, computational cost, and variety of security features. As a result, our scheme is very suitable for battery-limited mobile devices as compared with He-Wang's scheme.",
"title": ""
},
{
"docid": "a138a545a3de355757928b58ba430f5d",
"text": "Learning analytics is a research topic that is gaining increasing popularity in recent time. It analyzes the learning data available in order to make aware or improvise the process itself and/or the outcome such as student performance. In this survey paper, we look at the recent research work that has been conducted around learning analytics, framework and integrated models, and application of various models and data mining techniques to identify students at risk and to predict student performance. Keywords— Learning Analytics, Student Performance, Student Retention, Academic analytics, Course success.",
"title": ""
},
{
"docid": "82df50c6c1c51b00d00d505dce80b7ab",
"text": "This volume brings together a collection of extended versions of selected papers from two workshops on ontology learning, knowledge acquisition and related topics that were organized in the context of the European Conference on Artificial Intelligence (ECAI) 2004 and the International Conference on Knowledge Engineering and Management (EKAW) 2004. The volume presents current research in ontology learning, addressing three perspectives: methodologies that have been proposed to automatically extract information from texts and to give a structured organization to such knowledge, including approaches based on machine learning techniques; evaluation methods for ontology learning, aiming at defining procedures and metrics for a quantitative evaluation of the ontology learning task; and finally application scenarios that make ontology learning a challenging area in the context of real applications such as bio-informatics. According to the three perspectives mentioned above, the book is divided into three sections, each including a selection of papers addressing respectively the methods, the applications and the evaluation of ontology learning approaches. However, all selected papers pay considerably attention to the evaluation perspective, as this was a central topic of the ECAI 2004 workshop out of which most of the papers in this volume originate.",
"title": ""
},
{
"docid": "8238edb8ec7b9b1dd076c61c619b5da3",
"text": "Two complexity parameters of EEG, i.e. approximate entropy (ApEn) and Kolmogorov complexity (Kc) are utilized to characterize the complexity and irregularity of EEG data under the different mental fatigue states. Then the kernel principal component analysis (KPCA) and Hidden Markov Model (HMM) are combined to differentiate two mental fatigue states. The KPCA algorithm is employed to extract nonlinear features from the complexity parameters of EEG and improve the generalization performance of HMM. The investigation suggests that ApEn and Kc can effectively describe the dynamic complexity of EEG, which is strongly correlated with mental fatigue. Both complexity parameters are significantly decreased (P < 0.005) as the mental fatigue level increases. These complexity parameters may be used as the indices of the mental fatigue level. Moreover, the joint KPCA–HMM method can effectively reduce the dimensionality of the feature vectors, accelerate the classification speed and achieve higher classification accuracy (84%) of mental fatigue. Hence KPCA–HMM could be a promising model for the estimation of mental fatigue. Crown Copyright 2010 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0084faef0e08c4025ccb3f8fd50892f1",
"text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.",
"title": ""
},
{
"docid": "698cc50558811c7af44d40ba7dbdfe6f",
"text": "We show that the demand for news varies with the perceived affinity of the news organization to the consumer’s political preferences. In an experimental setting, conservatives and Republicans preferred to read news reports attributed to Fox News and to avoid news from CNN and NPR. Democrats and liberals exhibited exactly the opposite syndrome—dividing their attention equally between CNN and NPR, but avoiding Fox News. This pattern of selective exposure based on partisan affinity held not only for news coverage of controversial issues but also for relatively ‘‘soft’’ subjects such as crime and travel. The tendency to select news based on anticipated agreement was also strengthened among more politically engaged partisans. Overall, these results suggest that the further proliferation of new media and enhanced media choices may contribute to the further polarization of the news audience.",
"title": ""
},
{
"docid": "9187120909f27d1378f78659a6d57096",
"text": "Let us now come to our first attempts to define fractals in an intrinsic way and to deal with infinities and with their non-differentiability. We first consider the case of fractal curves drawn in a plane. The von Koch construction may be generalized in the complex plane by first giving ourselves a base (or “generator”) F1 made of p segments of length 1/q. The coordinates of the p points Pj of F1 are given, either in Cartesian or in polar coordinates (see Figs. 3.5 and 3.6) by:",
"title": ""
},
{
"docid": "6889f45db249d7054550ecb8df5ee822",
"text": "In this work a dynamic model of a planetary gear transmission is developed to study the sensitivity of the natural frequencies and vibration modes to system parameters in perturbed situation. Parameters under consideration include component masses ,moment of inertia , mesh and support stiff nesses .The model admits three planar degree of freedom for planets ,sun, ring, and carrier. Vibration modes are classified into translational, rotational and planet modes .Well-defined modal properties of tuned (cyclically symmetric) planetary gears for linear ,time-invariant case are used to calculate eigensensitivities and express them in simple formulae .These formulae provide efficient mean to determine the sensitivity to stiffness ,mass and inertia parameters in perturbed situation.",
"title": ""
},
{
"docid": "9c47b068f7645dc5464328e80be24019",
"text": "In this paper we propose a highly effective and scalable framework for recognizing logos in images. At the core of our approach lays a method for encoding and indexing the relative spatial layout of local features detected in the logo images. Based on the analysis of the local features and the composition of basic spatial structures, such as edges and triangles, we can derive a quantized representation of the regions in the logos and minimize the false positive detections. Furthermore, we propose a cascaded index for scalable multi-class recognition of logos.\n For the evaluation of our system, we have constructed and released a logo recognition benchmark which consists of manually labeled logo images, complemented with non-logo images, all posted on Flickr. The dataset consists of a training, validation, and test set with 32 logo-classes. We thoroughly evaluate our system with this benchmark and show that our approach effectively recognizes different logo classes with high precision.",
"title": ""
},
{
"docid": "693417e5608cf092842ab34ee8cce8d9",
"text": "Software as a Service has become a dominant IT news topic over the last few years. Especially in these current recession times, adopting SaaS solutions is increasingly becoming the more favorable alternative for customers rather than investing on brand new on-premise software or outsourcing. This fact has inevitably stimulated the birth of numerous SaaS vendors. Unfortunately, many small-to-medium vendors have emerged only to disappear again from the market. A lack of maturity in their pricing strategy often becomes part of the reason. This paper presents the ’Pricing Strategy Guideline Framework (PSGF)’ that assists SaaS vendors with a guideline to ensure that all the fundamental pricing elements are included in their pricing strategy. The PSGF describes five different layers that need to be taken to price a software: value creation, price structure, price and value communication, price policy, and price level. The PSGF can be of particularly great use for the startup vendors that tend to have less experience in pricing their SaaS solutions. Up until now, there have been no SaaS pricing frameworks available in the SaaS research area, such as the PSGF developed in this research. The PSGF is evaluated in a case study at a Dutch SaaS vendor in the Finance sector.",
"title": ""
},
{
"docid": "a771a452cb8869acc5c826ffed21d629",
"text": "Copyright © 2008 Massachusetts Medical Society. A 23-year-old woman presents with palpitations. Over the past 6 months, she has reported loose stools, a 10-lb (4.5-kg) weight loss despite a good appetite and food intake, and increased irritability. She appears to be anxious and has a pulse of 119 beats per minute and a blood pressure of 137/80 mm Hg. Her thyroid gland is diffusely and symmetrically enlarged to twice the normal size, and it is firm and nontender; a thyroid bruit is audible. She has an eyelid lag, but no proptosis or periorbital edema. The serum thyrotropin level is 0.02 μU per milliliter (normal range, 0.35 to 4.50) and the level of free thyroxine is 4.10 ng per deciliter (normal range, 0.89 to 1.76). How should she be further evaluated and treated?",
"title": ""
},
{
"docid": "b2a43491283732082c65f88c9b03016f",
"text": "BACKGROUND\nExpressing breast milk has become increasingly prevalent, particularly in some developed countries. Concurrently, breast pumps have evolved to be more sophisticated and aesthetically appealing, adapted for domestic use, and have become more readily available. In the past, expressed breast milk feeding was predominantly for those infants who were premature, small or unwell; however it has become increasingly common for healthy term infants. The aim of this paper is to systematically explore the literature related to breast milk expressing by women who have healthy term infants, including the prevalence of breast milk expressing, reported reasons for, methods of, and outcomes related to, expressing.\n\n\nMETHODS\nDatabases (Medline, CINAHL, JSTOR, ProQuest Central, PsycINFO, PubMed and the Cochrane library) were searched using the keywords milk expression, breast milk expression, breast milk pumping, prevalence, outcomes, statistics and data, with no limit on year of publication. Reference lists of identified papers were also examined. A hand-search was conducted at the Australian Breastfeeding Association Lactation Resource Centre. Only English language papers were included. All papers about expressing breast milk for healthy term infants were considered for inclusion, with a focus on the prevalence, methods, reasons for and outcomes of breast milk expression.\n\n\nRESULTS\nA total of twenty two papers were relevant to breast milk expression, but only seven papers reported the prevalence and/or outcomes of expressing amongst mothers of well term infants; all of the identified papers were published between 1999 and 2012. Many were descriptive rather than analytical and some were commentaries which included calls for more research, more dialogue and clearer definitions of breastfeeding. While some studies found an association between expressing and the success and duration of breastfeeding, others found the opposite. In some cases these inconsistencies were compounded by imprecise definitions of breastfeeding and breast milk feeding.\n\n\nCONCLUSIONS\nThere is limited evidence about the prevalence and outcomes of expressing breast milk amongst mothers of healthy term infants. The practice of expressing breast milk has increased along with the commercial availability of a range of infant feeding equipment. The reasons for expressing have become more complex while the outcomes, when they have been examined, are contradictory.",
"title": ""
},
{
"docid": "6b82421dc4b949134cf7ff52c64ed960",
"text": "The rise in popularity of mobile devices has led to a parallel growth in the size of the app store market, intriguing several research studies and commercial platforms on mining app stores. App store reviews are used to analyze different aspects of app development and evolution. However, app users’ feedback does not only exist on the app store. In fact, despite the large quantity of posts that are made daily on social media, the importance and value that these discussions provide remain mostly unused in the context of mobile app development. In this paper, we study how Twitter can provide complementary information to support mobile app development. By analyzing a total of 30,793 apps over a period of six weeks, we found strong correlations between the number of reviews and tweets for most apps. Moreover, through applying machine learning classifiers, topic modeling and subsequent crowd-sourcing, we successfully mined 22.4% additional feature requests and 12.89% additional bug reports from Twitter. We also found that 52.1% of all feature requests and bug reports were discussed on both tweets and reviews. In addition to finding common and unique information from Twitter and the app store, sentiment and content analysis were also performed for 70 randomly selected apps. From this, we found that tweets provided more critical and objective views on apps than reviews from the app store. These results show that app store review mining is indeed not enough; other information sources ultimately provide added value and information for app developers.",
"title": ""
},
{
"docid": "77cfb72acbc2f077c3d9b909b0a79e76",
"text": "In this paper, we analyze two general-purpose encoding types, trees and graphs systematically, focusing on trends over increasingly complex problems. Tree and graph encodings are similar in application but offer distinct advantages and disadvantages in genetic programming. We describe two implementations and discuss their evolvability. We then compare performance using symbolic regression on hundreds of random nonlinear target functions of both 1-dimensional and 8-dimensional cases. Results show the graph encoding has less bias for bloating solutions but is slower to converge and deleterious crossovers are more frequent. The graph encoding however is found to have computational benefits, suggesting it to be an advantageous trade-off between regression performance and computational effort.",
"title": ""
},
{
"docid": "9d461f1dbe42e2030efb3eb91603331f",
"text": "We developed a cross-lingual, question-answering (CLQA) system for Hindi and English. It accepts questions in English, finds candidate answers in Hindi newspapers, and translates the answer candidates into English along with the context surrounding each answer. The system was developed as part of the surprise language exercise (SLE) within the TIDES program.",
"title": ""
}
] |
scidocsrr
|
15d24196e8dfe7978936a2f376c082ab
|
A variational-difference numerical method for designing progressive-addition lenses
|
[
{
"docid": "c31ab5f94e64c849f88465e1be60939a",
"text": "Progressive addition lenses are a relatively new approach t o compensate for defects of the human visual system. While traditional spectacles use rotationally symmetric l enses, progressive lenses require the specification of free form surfaces. This poses difficult problems for the optimal design and its visual evaluation. This paper presents two new techniques for the visualizatio n of ptical systems and the optimization of progressive lenses. Both are based on the same wavefront tracingapproach to accurately evaluate the refraction properties of complex optical systems. We use the results of wavefront tracing for continuously refocusing the eye during rendering. Together with distribution ray tracing, this yields high-quality images that accurately simulate the visual quality of an optical system. The design of progressive lenses is difficult due to t he trade-off between the desired properties of the lens and unavoidable optical errors, such as astigmatism an d distortions. We use wavefront tracing to derive an accurate error functional describing the desired properti s and the optical error across a lens. Minimizing this error yields optimal free-form lens surfaces. While the basic approach is much more general, in this paper, w describe its application to the particular problem of designing and evaluating progressive lenses and demonst rate the benefits of the new approach with several example images.",
"title": ""
}
] |
[
{
"docid": "42883fb5e43959150ab6b3e64727b9e1",
"text": "For goal-directed arm movements, the nervous system generates a sequence of motor commands that bring the arm toward the target. Control of the octopus arm is especially complex because the arm can be moved in any direction, with a virtually infinite number of degrees of freedom. Here we show that arm extensions can be evoked mechanically or electrically in arms whose connection with the brain has been severed. These extensions show kinematic features that are almost identical to normal behavior, suggesting that the basic motor program for voluntary movement is embedded within the neural circuitry of the arm itself. Such peripheral motor programs represent considerable simplification in the motor control of this highly redundant appendage.",
"title": ""
},
{
"docid": "d68147bf8637543adf3053689de740c3",
"text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.",
"title": ""
},
{
"docid": "0adfab662c5179d15f6c231aac094ec8",
"text": "The explosion of the Internet provides a variety possibilities for communication, finding information and many other activities, turning into an essential tool in our modern everyday life. However, its huge expansion globally has created some serious safety issues, which require a special approach. One of these issues and perhaps the most important one concerns the safety of children on the Internet, as they are more exposed to dangers and threats in comparison with adults. In order to design effective measures against these threats and dangers deep understanding of minors' activities on the Internet, along with their motivation, is a first necessary step. It is shown in this report that minors' Internet activity tends heavily, and in an increasing manner, towards Online Social Networks (OSN). Thus, Internet filtering techniques designed and applied so far for child online protection need to be reconsidered and redesigned in a smarter way such as data analytics, advanced content analysis and data mining techniques are incorporated. OSN fake account identification, sexual content detection and flagging of multiple OSN accounts of the same person are examples that require such sophisticated techniques. This study deals with a literature review concerning the Internet activity and motivation of use by minors and presents in a coherent manner the identified risks and threats that children using the web and online social networks are exposed to. It also presents a systematic process for designing and developing modern and state of the art techniques to prevent minors' exposure to those risks and dangers.",
"title": ""
},
{
"docid": "450a0ffcd35400f586e766d68b75cc98",
"text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.",
"title": ""
},
{
"docid": "3193c9372c9cdc581133976035ba9694",
"text": "Spoken natural language often contains ambiguities that must be addressed by a spoken dialogue system. In this work, we present the internal semantic representation and resolution strategy of a dialogue system designed to understand ambiguous input. These mechanisms are domain independent; almost all task-specific knowledge is represented in parameterizable data structures. The system derives candidate descriptions of what the user said from raw input data, context-tracking and scoring. These candidates are chosen on the basis of a pragmatic analysis of responses elicited by an extensive implicit and explicit confirmation dialogue strategy, combined with specific error correction capabilities available to the user. This new ambiguity resolution strategy greatly improves dialogue interaction, eliminating about half of the errors in dialogues from a travel reservation task.",
"title": ""
},
{
"docid": "1aef9a6523de585a7f71440d71d5bab8",
"text": "BACKGROUND\nDiabetes self-management education is a cornerstone of successful diabetes management. Various methods have been used to reach the increasing numbers of patients with diabetes, including Internet-based education. The purpose of this article is to review various delivery methods of Internet diabetes education that have been evaluated, as well as their effectiveness in improving diabetes-related outcomes.\n\n\nMATERIALS AND METHODS\nLiterature was identified in the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, Medline, EBSCO, the Cochrane Library, and the Web of Science databases through searches using the following terms: \"type 2 diabetes AND internet/web based AND education\" and \"type 2 diabetes AND diabetes self-management education (DSME) AND web-based/internet OR technology assisted education.\" The search was limited to English language articles published in the last 10 years. The search yielded 111 articles; of these, 14 met criteria for inclusion in this review. Nine studies were randomized controlled trials, and study lengths varied from 2 weeks to 24 months, for a total of 2,802 participants.\n\n\nRESULTS\nDSME delivered via the Internet is effective at improving measures of glycemic control and diabetes knowledge compared with usual care. In addition, results demonstrate that improved eating habits and increased attendance at clinic appointments occur after the online DSME, although engagement and usage of Internet materials waned over time. Interventions that included an element of interaction with healthcare providers were seen as attractive to participants.\n\n\nCONCLUSIONS\nInternet-delivered diabetes education has the added benefit of easier access for many individuals, and patients can self-pace themselves through materials. More research on the cost-benefits of Internet diabetes education and best methods to maintain patient engagement are needed, along with more studies assessing the long-term impact of Internet-delivered DSME.",
"title": ""
},
{
"docid": "ea9f43aaab4383369680c85a040cedcf",
"text": "Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.",
"title": ""
},
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
{
"docid": "8707d04413b656a965a03ac6ff846037",
"text": "In today's scenario, cyber security is one of the major concerns in network security and malware pose a serious threat to cyber security. The foremost step to guard the cyber system is to have an in-depth knowledge of the existing malware, various types of malware, methods of detecting and bypassing the adverse effects of malware. In this work, machine learning approach to the fore-going static and dynamic analysis techniques is investigated and reported to discuss the most recent trends in cyber security. The study captures a wide variety of samples from various online sources. The peculiar details about the malware such as file details, signatures, and hosts involved, affected files, registry keys, mutexes, section details, imports, strings and results from different antivirus have been deeply analyzed to conclude origin and functionality of malware. This approach contributes to vital cyber situation awareness by combining different malware discovery techniques, for example, static examination, to alter the session of malware triage for cyber defense and decreases the count of false alarms. Current trends in warfare have been determined.",
"title": ""
},
{
"docid": "648637a8acf70ca8266e538801b5f192",
"text": "There has been an impressive gain in individual life expectancy with parallel increases in age-related chronic diseases of the cardiovascular, brain and immune systems. These can cause loss of autonomy, dependence and high social costs for individuals and society. It is now accepted that aging and age-related diseases are in part caused by free radical reactions. The arrest of aging and stimulation of rejuvenation of the human body is also being sought. Over the last 20 years the use of herbs and natural products has gained popularity and these are being consumed backed by epidemiological evidence. One such herb is garlic, which has been used throughout the history of civilization for treating a wide variety of ailments associated with aging. The role of garlic in preventing age-related diseases has been investigated extensively over the last 10-15 years. Garlic has strong antioxidant properties and it has been suggested that garlic can prevent cardiovascular disease, inhibit platelet aggregation, thrombus formation, prevent cancer, diseases associated with cerebral aging, arthritis, cataract formation, and rejuvenate skin, improve blood circulation and energy levels. This review provides an insight in to garlic's antioxidant properties and presents evidence that it may either prevent or delay chronic diseases associated with aging.",
"title": ""
},
{
"docid": "8a36b081bb9dc9b9ed4eb9f6796c7fdb",
"text": "Almost all problems in computer vision are related in one form or an other to the problem of estimating parameters from noisy data. In this tutorial, we present what is probably the most commonly used techniques for parameter es timation. These include linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormal ization; Kalman ltering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares). Particular attention has been devoted to discussions about the choice of appropriate minimization criteria and the robustness of the di erent techniques. Their application to conic tting is described. Key-words: Parameter estimation, Least-squares, Bias correction, Kalman lter ing, Robust regression Updated on April 15, 1996 To appear in Image and Vision Computing Journal, 1996",
"title": ""
},
{
"docid": "b5c8263dd499088ded04c589b5da1d9f",
"text": "User interfaces and information systems have become increasingly social in recent years, aimed at supporting the decentralized, cooperative production and use of content. A theory that predicts the impact of interface and interaction designs on such factors as participation rates and knowledge discovery is likely to be useful. This paper reviews a variety of observed phenomena in social information foraging and sketches a framework extending Information Foraging Theory towards making predictions about the effects of diversity, interference, and cost-of-effort on performance time, participation rates, and utility of discoveries.",
"title": ""
},
{
"docid": "3323474060ba5f1fbbbdcb152c22a6a9",
"text": "A compact triple-band microstrip slot antenna applied to WLAN/WiMAX applications is proposed in this letter. This antenna has a simpler structure than other antennas designed for realizing triple-band characteristics. It is just composed of a microstrip feed line, a substrate, and a ground plane on which some simple slots are etched. Then, to prove the validation of the design, a prototype is fabricated and measured. The experimental data show that the antenna can provide three impedance bandwidths of 600 MHz centered at 2.7 GHz, 430 MHz centered at 3.5 GHz, and 1300 MHz centered at 5.6 GHz.",
"title": ""
},
{
"docid": "5b1c38fccbd591e6ab00a66ef636eb5d",
"text": "There is a great thrust in industry toward the development of more feasible and viable tools for storing fast-growing volume, velocity, and diversity of data, termed ‘big data’. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology is due to the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient to provide consistent, scalable, and available solutions for continuously growing heterogeneous data. Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of storage technologies available for big data, investigating the performance and magnitude gains of these technologies. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer’s CAP theorem. The significance and applications of storage technologies and support to other categories are discussed. Several future research challenges are highlighted with the intention to expedite the deployment of a reliable and scalable storage system.",
"title": ""
},
{
"docid": "a49d0dfa008f9f85a2b0bb13b1ffd9da",
"text": "Abduction is inference to the best explanation In the TACITUS project at SRI we have developed an approach to abductive inference called weighted abduction that has resulted in a signi cant simpli cation of how the problem of interpreting texts is conceptualized The interpretation of a text is the minimal explanation of why the text would be true More precisely to interpret a text one must prove the logical form of the text from what is already mutually known allowing for coercions merging redundancies where possible and making assumptions where necessary It is shown how such local pragmatics problems as reference resolution the interpretation of compound nominals the resolution of syntactic ambiguity and metonymy and schema recognition can be solved in this manner Moreover this approach of interpretation as abduction can be combined with the older view of parsing as deduction to produce an elegant and thorough integration of syntax semantics and pragmatics one that spans the range of linguistic phenomena from phonology to discourse structure Finally we discuss means for making the abduction process e cient possibilities for extending the approach to other pragmatics phenomena and the semantics of the weights and costs in the abduction scheme",
"title": ""
},
{
"docid": "3043eb8fbe54b5ce5f2767934a6e689e",
"text": "A 21-year-old man presented with an enlarged giant hemangioma on glans penis which also causes an erectile dysfunction (ED) that partially responded to the intracavernous injection stimulation test. Although the findings in magnetic resonance imaging (MRI) indicated a glandular hemangioma, penile colored Doppler ultrasound revealed an invaded cavernausal hemangioma to the glans. Surgical excision was avoided according to the broad extension of the gland lesion. Holmium laser coagulation was applied to the lesion due to the cosmetically concerns. However, the cosmetic results after holmium laser application was not impressive as expected without an improvement in intracavernous injection stimulation test. In conclusion, holmium laser application should not be used to the hemangiomas of glans penis related to the corpus cavernosum, but further studies are needed to reveal the effects of holmium laser application in small hemangiomas restricted to the glans penis.",
"title": ""
},
{
"docid": "9f2db5cf1ee0cfd0250e68bdbc78b434",
"text": "A novel transverse equivalent network is developed in this letter to efficiently analyze a recently proposed leaky-wave antenna in substrate integrated waveguide (SIW) technology. For this purpose, precise modeling of the SIW posts for any distance between vias is essential to obtain accurate results. A detailed parametric study is performed resulting in leaky-mode dispersion curves as a function of the main geometrical dimensions of the antenna. Finally, design curves that directly provide the requested dimensions to synthesize the desired scanning response and leakage rate are reported and validated with experiments.",
"title": ""
},
{
"docid": "e88cab4c5e93b96fd39d63cd35de00fa",
"text": "Visual recognition algorithms are required today to exhibit adaptive abilities. Given a deep model trained on a specific, given task, it would be highly desirable to be able to adapt incrementally to new tasks, preserving scalability as the number of new tasks increases, while at the same time avoiding catastrophic forgetting issues. Recent work has shown that masking the internal weights of a given original conv-net through learned binary variables is a promising strategy. We build upon this intuition and take into account more elaborated affine transformations of the convolutional weights that include learned binary masks. We show that with our generalization it is possible to achieve significantly higher levels of adaptation to new tasks, enabling the approach to compete with fine tuning strategies by requiring slightly more than 1 bit per network parameter per additional task. Experiments on two popular benchmarks showcase the power of our approach, that achieves the new state of the art on the Visual Decathlon Challenge.",
"title": ""
},
{
"docid": "590d6fd3a0faddba67f48a660c3b6c86",
"text": "A graph G of order n is k-placeable if there exist k edge-disjoint copies of G in the complete graph Kn. Previous work characterized all trees that are k-placeable for k ≤ 3. This work extends those results by giving a complete characterization of all 4-placeable trees.",
"title": ""
},
{
"docid": "18c230517b8825b616907548829e341b",
"text": "The application of small Remotely-Controlled (R/C) aircraft for aerial photography presents many unique advantages over manned aircraft due to their lower acquisition cost, lower maintenance issue, and superior flexibility. The extraction of reliable information from these images could benefit DOT engineers in a variety of research topics including, but not limited to work zone management, traffic congestion, safety, and environmental. During this effort, one of the West Virginia University (WVU) R/C aircraft, named ‘Foamy’, has been instrumented for a proof-of-concept demonstration of aerial data acquisition. Specifically, the aircraft has been outfitted with a GPS receiver, a flight data recorder, a downlink telemetry hardware, a digital still camera, and a shutter-triggering device. During the flight a ground pilot uses one of the R/C channels to remotely trigger the camera. Several hundred high-resolution geo-tagged aerial photographs were collected during 10 flight experiments at two different flight fields. A Matlab based geo-reference software was developed for measuring distances from an aerial image and estimating the geo-location of each ground asset of interest. A comprehensive study of potential Sources of Errors (SOE) has also been performed with the goal of identifying and addressing various factors that might affect the position estimation accuracy. The result of the SOE study concludes that a significant amount of position estimation error was introduced by either mismatching of different measurements or by the quality of the measurements themselves. The first issue is partially addressed through the design of a customized Time-Synchronization Board (TSB) based on a MOD 5213 embedded microprocessor. The TSB actively controls the timing of the image acquisition process, ensuring an accurate matching of the GPS measurement and the image acquisition time. The second issue is solved through the development of a novel GPS/INS (Inertial Navigation System) based on a 9-state Extended Kalman Filter (EKF). The developed sensor fusion algorithm provides a good estimation of aircraft attitude angle without the need for using expensive sensors. Through the help of INS integration, it also provides a very smooth position estimation that eliminates large jumps typically seen in the raw GPS measurements.",
"title": ""
}
] |
scidocsrr
|
4312aeb29774277790da1908b44a009b
|
Enterprise Social Media: Definition, History, and Prospects for the Study of Social Technologies in Organizations
|
[
{
"docid": "8759277ebf191306b3247877e2267173",
"text": "As organizations scale up, their collective knowledge increases, and the potential for serendipitous collaboration between members grows dramatically. However, finding people with the right expertise or interests becomes much more difficult. Semi-structured social media, such as blogs, forums, and bookmarking, present a viable platform for collaboration-if enough people participate, and if shared content is easily findable. Within the trusted confines of an organization, users can trade anonymity for a rich identity that carries information about their role, location, and position in its hierarchy.\n This paper describes WaterCooler, a tool that aggregates shared internal social media and cross-references it with an organization's directory. We deployed WaterCooler in a large global enterprise and present the results of a preliminary user study. Despite the lack of complete social networking affordances, we find that WaterCooler changed users' perceptions of their workplace, made them feel more connected to each other and the company, and redistributed users' attention outside their own business groups.",
"title": ""
}
] |
[
{
"docid": "220bf8be47ae728d922150b520175b8a",
"text": "A 29-year-old man, suffering from dry skin, who had a brother with rhinoconjunctivitis, was referred to us by a private dermatologist in January 2016 because of erythematous and scaly lips, which had been almost continuously present for >8 years. The eruption had started at both corners of the mouth, but gradually spread to the entire upper and lower lips. At consultation, we also observed erythema and scales at the philtrum, arranged in two vertical lines divided from each other at the midline",
"title": ""
},
{
"docid": "48c49e1f875978ec4e2c1d4549a98ffd",
"text": "Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.",
"title": ""
},
{
"docid": "1164a33e84a333628ec8fe74aab45f4c",
"text": "Low rank approximation is an important tool in many applications. Given an observed matrix with elements corrupted by Gaussian noise it is possible to find the best approximating matrix of a given rank through singular value decomposition. However, due to the non-convexity of the formulation it is not possible to incorporate any additional knowledge of the sought matrix without resorting to heuristic optimization techniques. In this paper we propose a convex formulation that is more flexible in that it can be combined with any other convex constraints and penalty functions. The formulation uses the so called convex envelope, which is the provably best possible convex relaxation. We show that for a general class of problems the envelope can be efficiently computed and may in some cases even have a closed form expression. We test the algorithm on a number of real and synthetic data sets and show state-of-the-art results.",
"title": ""
},
{
"docid": "f18aefe00103d33ae256a6fd161531ff",
"text": "Conventional database optimizers take full advantage of associativity and commutativity properties of join to implement e cient and powerful optimizations on select/project/join queries. However, only limited optimization is performed on other binary operators. In this paper, we present the theory and algorithms needed to generate alternative evaluation orders for the optimization of queries containing outerjoins. Our results include both a complete set of transformation rules, suitable for new-generation, transformation-based optimizers, and a bottom-up join enumeration algorithm compatible with those used by traditional optimizers.",
"title": ""
},
{
"docid": "26e90d8dca906c2e7dd023441ba4438a",
"text": "In this paper, we show that the handedness of a planar chiral checkerboard-like metasurface can be dynamically switched by modulating the local sheet impedance of the metasurface structure. We propose a metasurface design to realize the handedness switching and theoretically analyze its electromagnetic characteristic based on Babinet’s principle. Numerical simulations of the proposed metasurface are performed to validate the theoretical analysis. It is demonstrated that the polarity of asymmetric transmission for circularly polarized waves, which is determined by the planar chirality of the metasurface, is inverted by switching the sheet impedance at the interconnection points of the checkerboard-like structure. The physical origin of the asymmetric transmission is also discussed in terms of the surface current and charge distributions on the metasurface.",
"title": ""
},
{
"docid": "da28960f4a5daeb80aa5c344db326c8d",
"text": "Adaptive traffic signal control, which adjusts traffic signal timing according to real-time traffic, has been shown to be an effective method to reduce traffic congestion. Available works on adaptive traffic signal control make responsive traffic signal control decisions based on human-crafted features (e.g. vehicle queue length). However, human-crafted features are abstractions of raw traffic data (e.g., position and speed of vehicles), which ignore some useful traffic information and lead to suboptimal traffic signal controls. In this paper, we propose a deep reinforcement learning algorithm that automatically extracts all useful features (machine-crafted features) from raw real-time traffic data and learns the optimal policy for adaptive traffic signal control. To improve algorithm stability, we adopt experience replay and target network mechanisms. Simulation results show that our algorithm reduces vehicle delay by up to 47% and 86% when compared to another two popular traffic signal control algorithms, longest queue first algorithm and fixed time control algorithm, respectively.",
"title": ""
},
{
"docid": "071d7bc76ae1a23c82789d57f5647f40",
"text": "Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program † . Our approach to achieve the objectives of ULTRAVis, called iLeader, incorporates a full color 40° field of view (FOV) see-thru holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark-up the 3D battle-space with symbologic identification of graphical control measures, friendly force positions and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360 ̊ representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, nondisruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.",
"title": ""
},
{
"docid": "2cee5301ded39d4256f04d4440689a9c",
"text": "Job performance refers to how effective employees a r in accomplishing their tasks and responsibilitie s related to direct patient care. Improving the perfo rmance of employees has been a topic of great inter est to practitioners as well as researchers. The aim of th e s udy is to analysis the impacts of job performan ce level on nurses’ performance working in public hospitals. In order to achieve the study objective, a survey conducted. Questionnaires distributed to the public sector hospital’s manager in Saudi Arabia. The fin dings of the study turn out to be true; the study will co ntribute to both theory and practice. Through the p r sent study, the researcher expects the findings to shed light on the research conducted regression to analy sis the impacts of job performance level on nurses’ in publ ic sector hospitals in Saudi Arabia.",
"title": ""
},
{
"docid": "e2ffac5515399469b93ed53e05d92345",
"text": "Network security is a major issue affecting SCADA systems designed and deployed in the last decade. Simulation of network attacks on a SCADA system presents certain challenges, since even a simple SCADA system is composed of models in several domains and simulation environments. Here we demonstrate the use of C2WindTunnel to simulate a plant and its controller, and the Ethernet network that connects them, in different simulation environments. We also simulate DDOS-like attacks on a few of the routers to observe and analyze the effec ts of a network attack on such a system. I. I NTRODUCTION Supervisory Control And Data Acquisition (SCADA) systems are computer-based monitoring tools that are used to manage and control critical infrastructure functions in re al time, like gas utilities, power plants, chemical plants, tr affic control systems, etc. A typical SCADA system consists of a SCADA Master which provides overall monitoring and control for the system, local process controllers called Re mot Terminal Units (RTUs), sensors and actuators and a network which provides the communication between the Master and the RTUs. A. Security of SCADA Systems SCADA systems are designed to have long life spans, usually in decades. The SCADA systems currently installed and used were designed at a time when security issues were not paramount, which is not the case today. Furthermore, SCADA systems are now connected to the Internet for remote monitoring and control making the systems susceptible to network security problems which arise through a connection to a public network. Despite these evident security risks, SCADA systems are cumbersome to upgrade for several reasons. Firstly, adding security features often implies a large downtime, which is not desirable in systems like power plants and traffic contro l. Secondly, SCADA devices with embedded codes would need to be completely replaced to add new security protocols. Lastly, the networks used in a SCADA system are usually customized for that system and cannot be generalized. Security of legacy SCADA systems and design of future systems both thus rely heavily on the assessment and rectification of security vulnerabilities of SCADA implementatio ns in realistic settings. B. Simulation of SCADA Systems In a SCADA system it is essential to model and simulate communication networks in order to study mission critical situations such as network failures or attacks. Even a simpl e SCADA system is composed of several units in various domains like dynamic systems, networks and physical environments, and each of these units can be modeled using a variety of available simulators and/or emulators. An example system could include simulating controller and plant dynamics in Simulink or Matlab, network architecture and behavior in a network simulator like OMNeT++, etc. An adequate simulation of such a system necessitates the use of an underlying software infrastructure that connects and re lates the heterogeneous simulators in a logically and temporally coherent framework.",
"title": ""
},
{
"docid": "5106155fbe257c635fb9621240fd7736",
"text": "AIM\nThe aim of this study was to investigate the prevalence of pain and pain assessment among inpatients in a university hospital.\n\n\nBACKGROUND\nPain management could be considered an indicator of quality of care. Few studies report on prevalence measures including all inpatients.\n\n\nDESIGN\nQuantitative and explorative.\n\n\nMETHOD\nSurvey.\n\n\nRESULTS\nOf the inpatients at the hospital who answered the survey, 494 (65%) reported having experienced pain during the preceding 24 hours. Of the patients who reported having experienced pain during the preceding 24 hours, 81% rated their pain >3 and 42.1% rated their pain >7. Of the patients who reported having experienced pain during the preceding 24 hours, 38.7% had been asked to self-assess their pain using a Numeric Rating Scale (NRS); 29.6% of the patients were completely satisfied, and 11.5% were not at all satisfied with their participation in pain management.\n\n\nCONCLUSIONS\nThe result showed that too many patients are still suffering from pain and that the NRS is not used to the extent it should be. Efforts to overcome under-implementation of pain assessment are required, particularly on wards where pain is not obvious, e.g., wards that do not deal with surgery patients. Work to improve pain management must be carried out through collaboration across professional groups.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nUsing a pain assessment tool such as the NRS could help patients express their pain and improve communication between nurses and patients in relation to pain as well as allow patients to participate in their own care. Carrying out prevalence pain measures similar to those used here could be helpful in performing quality improvement work in the area of pain management.",
"title": ""
},
{
"docid": "4768001167cefad7b277e3b77de648bb",
"text": "MicroRNAs (miRNAs) regulate gene expression at the posttranscriptional level and are therefore important cellular components. As is true for protein-coding genes, the transcription of miRNAs is regulated by transcription factors (TFs), an important class of gene regulators that act at the transcriptional level. The correct regulation of miRNAs by TFs is critical, and increasing evidence indicates that aberrant regulation of miRNAs by TFs can cause phenotypic variations and diseases. Therefore, a TF-miRNA regulation database would be helpful for understanding the mechanisms by which TFs regulate miRNAs and understanding their contribution to diseases. In this study, we manually surveyed approximately 5000 reports in the literature and identified 243 TF-miRNA regulatory relationships, which were supported experimentally from 86 publications. We used these data to build a TF-miRNA regulatory database (TransmiR, http://cmbi.bjmu.edu.cn/transmir), which contains 82 TFs and 100 miRNAs with 243 regulatory pairs between TFs and miRNAs. In addition, we included references to the published literature (PubMed ID) information about the organism in which the relationship was found, whether the TFs and miRNAs are involved with tumors, miRNA function annotation and miRNA-associated disease annotation. TransmiR provides a user-friendly interface by which interested parties can easily retrieve TF-miRNA regulatory pairs by searching for either a miRNA or a TF.",
"title": ""
},
{
"docid": "ba1b3fb5f147b5af173e5f643a2794e0",
"text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.",
"title": ""
},
{
"docid": "569fed958b7a471e06ce718102687a1e",
"text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.",
"title": ""
},
{
"docid": "9005bd3eaab940344da6158f0eca2d38",
"text": "We present an example-based crowd simulation technique. Most crowd simulation techniques assume that the behavior exhibited by each person in the crowd can be defined by a restricted set of rules. This assumption limits the behavioral complexity of the simulated agents. By learning from real-world examples, our autonomous agents display complex natural behaviors that are often missing in crowd simulations. Examples are created from tracked video segments of real pedestrian crowds. During a simulation, autonomous agents search for examples that closely match the situation that they are facing. Trajectories taken by real people in similar situations, are copied to the simulated agents, resulting in seemingly natural behaviors.",
"title": ""
},
{
"docid": "53ab91cdff51925141c43c4bc1c6aade",
"text": "Floods are the most common natural disasters, and cause significant damage to life, agriculture and economy. Research has moved on from mathematical modeling or physical parameter based flood forecasting schemes, to methodologies focused around algorithmic approaches. The Internet of Things (IoT) is a field of applied electronics and computer science where a system of devices collects data in real time and transfers it through a Wireless Sensor Network (WSN) to the computing device for analysis. IoT generally combines embedded system hardware techniques along with data science or machine learning models. In this work, an IoT and machine learning based embedded system is proposed to predict the probability of floods in a river basin. The model uses a modified mesh network connection over ZigBee for the WSN to collect data, and a GPRS module to send the data over the internet. The data sets are evaluated using an artificial neural network model. The results of the analysis which are also appended show a considerable improvement over the currently existing methods.",
"title": ""
},
{
"docid": "cfcae9b30fda24358e79e4e664ed747d",
"text": "Automated driving is predicted to enhance traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually realize fully automated driving, the intelligent vehicle system must have the ability to plan different maneuvers while adapting to the surrounding traffic environment. This paper presents an algorithm for longitudinal and lateral trajectory planning for automated driving maneuvers where the vehicle does not have right of way, i.e., yielding maneuvers. Such maneuvers include, e.g., lane change, roundabout entry, and intersection crossing. In the proposed approach, the traffic environment which the vehicle must traverse is incorporated as constraints on its longitudinal and lateral positions. The trajectory planning problem can thereby be formulated as two loosely coupled low-complexity model predictive control problems for longitudinal and lateral motion. Simulation results demonstrate the ability of the proposed trajectory planning algorithm to generate smooth collision-free maneuvers which are appropriate for various traffic situations.",
"title": ""
},
{
"docid": "d8092e95e3275585d06a6efdc02aee31",
"text": "The present study provides a new account of how fluid intelligence influences academic performance. In this account a complex learning component of fluid intelligence tests is proposed to play a major role in predicting academic performance. A sample of 2, 277 secondary school students completed two reasoning tests that were assumed to represent fluid intelligence and standardized math and verbal tests assessing academic performance. The fluid intelligence data were decomposed into a learning component that was associated with the position effect of intelligence items and a constant component that was independent of the position effect. Results showed that the learning component contributed significantly more to the prediction of math and verbal performance than the constant component. The link from the learning component to math performance was especially strong. These results indicated that fluid intelligence, which has so far been considered as homogeneous, could be decomposed in such a way that the resulting components showed different properties and contributed differently to the prediction of academic performance. Furthermore, the results were in line with the expectation that learning was a predictor of performance in school.",
"title": ""
},
{
"docid": "6e5b3746474ec858799b35bc5236c3e0",
"text": "Mobile systems have become widely adopted by users to perform sensitive operations ranging from on-line payments for personal use to remote access to enterprise assets. Thus, attacks on mobile devices can cause significant loss to user's personal data as well as to valuable enterprise assets. In order to mitigate risks arising from attacks, various approaches have been proposed including the use of Trusted Execution Environment (TEE) to isolate and protect the execution of sensitive code from the rest of the system, e.g. applications and other software.However, users remain at risk of exploits via several types of software vulnerabilities - indicating that enterprises have failed to deliver the required protection, despite the use of existing isolation technologies. In this paper, we investigate Samsung KNOX and its usage of TEE as being the current technology providing secure containers. First, we study how KNOX uses TEE and perform analysis on its design consideration from a system vulnerabilities perspective. Second, we analyse and discuss recent attacks on KNOX and how those attacks exploit system vulnerabilities. Finally, we present new shortcomings emerging from our analysis of KNOX architecture. Our research exhibits that system vulnerabilities are the underlying cause of many attacks on systems and it reveals how they affect fundamental design security principles when the full potential of TEE is not exploited.",
"title": ""
},
{
"docid": "082b1c341435ce93cfab869475ed32bd",
"text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory",
"title": ""
},
{
"docid": "170873ad959b33eea76e9f542c5dbff6",
"text": "This paper reports on a development framework, two prototypes, and a comparative study in the area of multi-tag Near-Field Communication (NFC) interaction. By combining NFC with static and dynamic displays, such as posters and projections, services are made more visible and allow users to interact with them easily by interacting directly with the display with their phone. In this paper, we explore such interactions, in particular, the combination of the phone display and large NFC displays. We also compare static displays and dynamic displays, and present a list of deciding factors for a particular deployment situation. We discuss one prototype for each display type and developed a corresponding framework which can be used to accelerate the development of such prototypes whilst supporting a high level of versatility. The findings of a controlled comparative study indicate, among other things, that all participants preferred the dynamic display, although the static display has advantages, e.g. with respect to privacy and portability.",
"title": ""
}
] |
scidocsrr
|
0f1525313cf095d9a5cd350e1f6197c7
|
Semantic Web in data mining and knowledge discovery: A comprehensive survey
|
[
{
"docid": "cb08df0c8ff08eecba5d7fed70c14f1e",
"text": "In this article, we propose a family of efficient kernels for l a ge graphs with discrete node labels. Key to our method is a rapid feature extraction scheme b as d on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequ ence of graphs, whose node attributes capture topological and label information. A fami ly of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly e ffici nt kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of e dges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classifica tion benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale ap plic tions of graph kernels in various disciplines such as computational biology and social netwo rk analysis.",
"title": ""
},
{
"docid": "ec58ee349217d316f87ff684dba5ac2b",
"text": "This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases.",
"title": ""
}
] |
[
{
"docid": "c5759678a84864a843c20c5f4a23f29f",
"text": "We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry of not only the visible but also the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate intensity per pixel I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t) for each pixel and uses an ultra-short pulse laser for illumination. Emerging technologies are supporting cameras with a temporal-profile per pixel at picosecond resolution, allowing us to capture an ultra-high speed time-image. This time-image contains the time profile of irradiance incident at a sensor pixel. We experimentally corroborated our theory with free space hardware experiments using a femtosecond laser and a picosecond accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.",
"title": ""
},
{
"docid": "4d2b0b01fae0ff2402fc2feaa5657574",
"text": "In this paper, we give an algorithm for the analysis and correction of the distorted QR barcode (QR-code) image. The introduced algorithm is based on the code area finding by four corners detection for 2D barcode. We combine Canny edge detection with contours finding algorithms to erase noises and reduce computation and utilize two tangents to approximate the right-bottom point. Then, we give a detail description on how to use inverse perspective transformation in rebuilding a QR-code image from a distorted one. We test our algorithm on images taken by mobile phones. The experiment shows that our algorithm is effective.",
"title": ""
},
{
"docid": "66a8e7c076ad2cfb7bbe42836607a039",
"text": "The Spider system at the Oak Ridge National Laboratory’s Leadership Computing Facility (OLCF) is the world’s largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF’s diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF’s diverse computational platforms, the aggregate performance and storage capacity of Spider exceed that of our previously deployed systems by a factor of 6x 240 GB/sec, and 17x 10 Petabytes, respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present in this paper our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenges, such as stressed metadata performance and the need for file system quality of service alongside with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.",
"title": ""
},
{
"docid": "70ec2398526863c05b41866593214d0a",
"text": "Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.",
"title": ""
},
{
"docid": "933f8ba333e8cbef574b56348872b313",
"text": "Automatic image annotation has been an important research topic in facilitating large scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such setting gives convenience for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that forms image annotation task as a sequence generation problem so that RIA can natively predict the proper length of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in training phase has a great impact on the final annotation performance.",
"title": ""
},
{
"docid": "b0575058a6950bc17a976504145dca0e",
"text": "BACKGROUND\nCitation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.",
"title": ""
},
{
"docid": "9feeeabb8491a06ae130c99086a9d069",
"text": "Dopamine (DA) is a key transmitter in the basal ganglia, yet DA transmission does not conform to several aspects of the classic synaptic doctrine. Axonal DA release occurs through vesicular exocytosis and is action potential- and Ca²⁺-dependent. However, in addition to axonal release, DA neurons in midbrain exhibit somatodendritic release by an incompletely understood, but apparently exocytotic, mechanism. Even in striatum, axonal release sites are controversial, with evidence for DA varicosities that lack postsynaptic specialization, and largely extrasynaptic DA receptors and transporters. Moreover, DA release is often assumed to reflect a global response to a population of activities in midbrain DA neurons, whether tonic or phasic, with precise timing and specificity of action governed by other basal ganglia circuits. This view has been reinforced by anatomical evidence showing dense axonal DA arbors throughout striatum, and a lattice network formed by DA axons and glutamatergic input from cortex and thalamus. Nonetheless, localized DA transients are seen in vivo using voltammetric methods with high spatial and temporal resolution. Mechanistic studies using similar methods in vitro have revealed local regulation of DA release by other transmitters and modulators, as well as by proteins known to be disrupted in Parkinson's disease and other movement disorders. Notably, the actions of most other striatal transmitters on DA release also do not conform to the synaptic doctrine, with the absence of direct synaptic contacts for glutamate, GABA, and acetylcholine (ACh) on striatal DA axons. Overall, the findings reviewed here indicate that DA signaling in the basal ganglia is sculpted by cooperation between the timing and pattern of DA input and those of local regulatory factors.",
"title": ""
},
{
"docid": "b2c299e13eff8776375c14357019d82e",
"text": "This paper is focused on the application of complementary split-ring resonators (CSRRs) to the suppression of the common (even) mode in microstrip differential transmission lines. By periodically and symmetrically etching CSRRs in the ground plane of microstrip differential lines, the common mode can be efficiently suppressed over a wide band whereas the differential signals are not affected. Throughout the paper, we present and discuss the principle for the selective common-mode suppression, the circuit model of the structure (including the models under even- and odd-mode excitation), the strategies for bandwidth enhancement of the rejected common mode, and a methodology for common-mode filter design. On the basis of the dispersion relation for the common mode, it is shown that the maximum achievable rejection bandwidth can be estimated. Finally, theory is validated by designing and measuring a differential line and a balanced bandpass filter with common-mode suppression, where double-slit CSRRs (DS-CSRRs) are used in order to enhance the common-mode rejection bandwidth. Due to the presence of DS-CSRRs, the balanced filter exhibits more than 40 dB of common-mode rejection within a 34% bandwidth around the filter pass band.",
"title": ""
},
{
"docid": "353f91c6e35cd5703b5b238f929f543e",
"text": "This paper provides an overview of prominent deep learning toolkits and, in particular, reports on recent publications that contributed open source software for implementing tasks that are common in intelligent user interfaces (IUI). We provide a scientific reference for researchers and software engineers who plan to utilise deep learning techniques within their IUI research and development projects. ACM Classification",
"title": ""
},
{
"docid": "7ddfa92cee856e2ef24caf3e88d92b93",
"text": "Applications are getting increasingly interconnected. Although the interconnectedness provide new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users’ behavior, preferences, and needs can be connected to. In this paper we present our works on a personality-based music recommender system in which we use users’ personality traits as a general model. We identified relationships between users’ personality and their behavior, preferences, and needs, and also investigated different ways to infer users’ personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and show how these models can be implemented in a music recommender system to positively contribute to the user experience.",
"title": ""
},
{
"docid": "786ef1b656c182ab71f7a63e7f263b3f",
"text": "The spectrum of a first-order sentence is the set of cardinalities of its finite models. This paper is concerned with spectra of sentences over languages that contain only unary function symbols. In particular, it is shown that a set S of natural numbers is the spectrum of a sentence over the language of one unary function symbol precisely if S is an eventually periodic set.",
"title": ""
},
{
"docid": "a9b96c162e9a7f39a90c294167178c05",
"text": "The performance of automotive radar systems is expected to significantly increase in the near future. With enhanced resolution capabilities more accurate and denser point clouds of traffic participants and roadside infrastructure can be acquired and so the amount of gathered information is growing drastically. One main driver for this development is the global trend towards self-driving cars, which all rely on precise and fine-grained sensor information. New radar signal processing concepts have to be developed in order to provide this additional information. This paper presents a prototype high resolution radar sensor which helps to facilitate algorithm development and verification. The system is operational under real-time conditions and achieves excellent performance in terms of range, velocity and angular resolution. Complex traffic scenarios can be acquired out of a moving test vehicle, which is very close to the target application. First measurement runs on public roads are extremely promising and show an outstanding single-snapshot performance. Complex objects can be precisely located and recognized by their contour shape. In order to increase the possible recording time, the raw data rate is reduced by several orders of magnitude in real-time by means of constant false alarm rate (CFAR) processing. The number of target cells can still exceed more than 10 000 points in a single measurement cycle for typical road scenarios.",
"title": ""
},
{
"docid": "7b45559be60b099de0bcf109c9a539b7",
"text": "The split-heel technique has distinct advantages over the conventional medial or lateral approach in the operative debridement of extensive and predominantly plantar chronic calcaneal osteomyelitis in children above 5 years of age. We report three cases (age 5.5-11 years old) of chronic calcaneal osteomyelitis in children treated using the split-heel approach with 3-10 years follow-up showing excellent functional and cosmetic results.",
"title": ""
},
{
"docid": "b27e5e9540e625912a4e395079f6ac68",
"text": "We propose Cooperative Training (CoT) for training generative models that measure a tractable density for discrete data. CoT coordinately trains a generator G and an auxiliary predictive mediator M . The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P , and that of G is to minimize the Jensen-Shannon divergence estimated through M . CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE. This low-variance algorithm is theoretically proved to be unbiased for both generative and predictive tasks. We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost.",
"title": ""
},
{
"docid": "6ceab65cc9505cf21824e9409cf67944",
"text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on largescale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level",
"title": ""
},
{
"docid": "c2d4f97913bb3acceb3703f1501547a8",
"text": "Pattern recognition is the discipline studying the design and operation of systems capable to recognize patterns with specific properties in data so urce . Intrusion detection, on the other hand, is in charge of identifying anomalou s activities by analyzing a data source, be it the logs of an operating system or in the network traffic. It is easy to find similarities between such research fields , and it is straightforward to think of a way to combine them. As to the descriptions abov e, we can imagine an Intrusion Detection System (IDS) using techniques prop er of the pattern recognition field in order to discover an attack pattern within the network traffic. What we propose in this work is such a system, which exp loits the results of research in the field of data mining, in order to discover poten tial attacks. The paper also presents some experimental results dealing with p erformance of our system in a real-world operational scenario.",
"title": ""
},
{
"docid": "32faa5a14922d44101281c783cf6defb",
"text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.",
"title": ""
},
{
"docid": "b3ffb805b3dcffc4e5c9cec47f90e566",
"text": "Real-time ride-sharing, which enables on-the-fly matching between riders and drivers (even en-route), is an important problem due to its environmental and societal benefits. With the emergence of many ride-sharing platforms (e.g., Uber and Lyft), the design of a scalable framework to match riders and drivers based on their various constraints while maximizing the overall profit of the platform becomes a distinguishing business strategy.\n A key challenge of such framework is to satisfy both types of the users in the system, e.g., reducing both riders' and drivers' travel distances. However, the majority of the existing approaches focus only on minimizing the total travel distance of drivers which is not always equivalent to shorter trips for riders. Hence, we propose a fair pricing model that simultaneously satisfies both the riders' and drivers' constraints and desires (formulated as their profiles). In particular, we introduce a distributed auction-based framework where each driver's mobile app automatically bids on every nearby request taking into account many factors such as both the driver's and the riders' profiles, their itineraries, the pricing model, and the current number of riders in the vehicle. Subsequently, the server determines the highest bidder and assigns the rider to that driver. We show that this framework is scalable and efficient, processing hundreds of tasks per second in the presence of thousands of drivers. We compare our framework with the state-of-the-art approaches in both industry and academia through experiments on New York City's taxi dataset. Our results show that our framework can simultaneously match more riders to drivers (i.e., higher service rate) by engaging the drivers more effectively. Moreover, our frame-work schedules shorter trips for riders (i.e., better service quality). Finally, as a consequence of higher service rate and shorter trips, our framework increases the overall profit of the ride-sharing platforms.",
"title": ""
},
{
"docid": "32fe17034223a3ea9a7c52b4107da760",
"text": "With the prevalence of the internet, mobile devices and commercial streaming music services, the amount of digital music available is greater than ever. Sorting through all this music is an extremely time-consuming task. Music recommendation systems search through this music automatically and suggest new songs to users. Music recommendation systems have been developed in commercial and academic settings, but more research is needed. The perfect system would handle all the user’s listening needs while requiring only minimal user input. To this end, I have reviewed 20 articles within the field of music recommendation with the goal of finding how the field can be improved. I present a survey of music recommendation, including an explanation of collaborative and content-based filtering with their respective strengths and weaknesses. I propose a novel next-track recommendation system that incorporates techniques advocated by the literature. The system relies heavily on user skipping behavior to drive both a content-based and a collaborative approach. It uses active learning to balance the needs of exploration vs. exploitation in playing music for the user.",
"title": ""
}
] |
scidocsrr
|
bc04f53bf1928db5a36744a94216ce73
|
Smart e-Health Gateway: Bringing intelligence to Internet-of-Things based ubiquitous healthcare systems
|
[
{
"docid": "be99f6ba66d573547a09d3429536049e",
"text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.",
"title": ""
}
] |
[
{
"docid": "ec6f53bd2cbc482c1450934b1fd9e463",
"text": "Cloud computing providers have setup several data centers at different geographical locations over the Internet in order to optimally serve needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine optimal location for hosting application services to achieve reasonable QoS levels. Further, the Cloud computing providers are unable to predict geographic distribution of users consuming their services, hence the load coordination must happen automatically, and distribution of services must change in response to changes in the load. To counter this problem, we advocate creation of federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that federated Cloud computing model has immense potential as it offers significant performance gains as regards to response time and cost saving under dynamic workload scenarios.",
"title": ""
},
{
"docid": "fb7f079d104e81db41b01afe67cdf3b0",
"text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.",
"title": ""
},
{
"docid": "c142745b4f4fe202fb4c59a494060f70",
"text": "We created a comprehensive set of health-system performance measurements for China nationally and regionally, with health-system coverage and catastrophic medical spending as major indicators. With respect to performance of health-care delivery, China has done well in provision of maternal and child health services, but poorly in addressing non-communicable diseases. For example, coverage of hospital delivery increased from 20% in 1993 to 62% in 2003 for women living in rural areas. However, effective coverage of hypertension treatment was only 12% for patients living in urban areas and 7% for those in rural areas in 2004. With respect to performance of health-care financing, 14% of urban and 16% of rural households incurred catastrophic medical expenditure in 2003. Furthermore, 15% of urban and 22% of rural residents had affordability difficulties when accessing health care. Although health-system coverage improved for both urban and rural areas from 1993 to 2003, affordability difficulties had worsened in rural areas. Additionally, substantial inter-regional and intra-regional inequalities in health-system coverage and health-care affordability measures exist. People with low income not only receive lower health-system coverage than those with high income, but also have an increased probability of either not seeking health care when ill or undergoing catastrophic medical spending. China's current health-system reform efforts need to be assessed for their effect on performance indicators, for which substantial data gaps exist.",
"title": ""
},
{
"docid": "9a7786a5f05876bc9265246531077c81",
"text": "PURPOSE\nThe aim of this in vivo study was to evaluate the clinical performance of porcelain veneers after 5 and 10 years of clinical service.\n\n\nMATERIALS AND METHODS\nA single operator placed porcelain laminates on 87 maxillary anterior teeth in 25 patients. All restorations were recalled at 5 years and 93% of the restorations at 10 years. Clinical performance was assessed in terms of esthetics, marginal integrity, retention, clinical microleakage, caries recurrence, fracture, vitality, and patient satisfaction. Failures were recorded either as \"clinically unacceptable but repairable\" or as \"clinically unacceptable with replacement needed\".\n\n\nRESULTS\nPorcelain veneers maintained their esthetic appearance after 10 years of clinical service. None of the veneers were lost. The percentage of restorations that remained \"clinically acceptable\" (without need for intervention) significantly decreased from an average of 92% (95 CI: 90% to 94%) at 5 years to 64% (95 CI: 51% to 77%) at 10 years. Fractures of porcelain (11%) and large marginal defects (20%) were the main reason for failure. Marginal defects were especially noticed at locations where the veneer ended in an existing composite filling. At such vulnerable locations, severe marginal discoloration (19%) and caries recurrence (10%) were frequently observed. Most of the restorations that present one or more \"clinically unacceptable\" problems (28%) were repairable. Only 4% of the restorations needed to be replaced at the 10-year recall.\n\n\nCONCLUSION\nIt was concluded that labial porcelain veneers represent a reliable, effective procedure for conservative treatment of unesthetic anterior teeth. Occlusion, preparation design, presence of composite fillings, and the adhesive used to bond veneers to tooth substrate are covariables that contribute to the clinical outcome of these restorations in the long-term.",
"title": ""
},
{
"docid": "ee4ebafe1b40e3d2020b2fb9a4b881f6",
"text": "Probing the lowest energy configuration of a complex system by quantum annealing was recently found to be more effective than its classical, thermal counterpart. By comparing classical and quantum Monte Carlo annealing protocols on the two-dimensional random Ising model (a prototype spin glass), we confirm the superiority of quantum annealing relative to classical annealing. We also propose a theory of quantum annealing based on a cascade of Landau-Zener tunneling events. For both classical and quantum annealing, the residual energy after annealing is inversely proportional to a power of the logarithm of the annealing time, but the quantum case has a larger power that makes it faster.",
"title": ""
},
{
"docid": "eb52b00d6aec954e3c64f7043427709c",
"text": "The paper presents a ball on plate balancing system useful for various educational purposes. A touch-screen placed on the plate is used for ball's position sensing and two servomotors are employed for balancing the plate in order to control ball's Cartesian coordinates. The design of control embedded systems is demonstrated for different control algorithms in compliance with FreeRTOS real time operating system and dsPIC33 microcontroller. On-line visualizations useful for system monitoring are provided by a PC host application connected with the embedded application. The measurements acquired during real-time execution and the parameters of the system are stored in specific data files, as support for any desired additional analysis. Taking into account the properties of this controlled system (instability, fast dynamics) and the capabilities of the embedded architecture (diversity of the involved communication protocols, diversity of employed hardware components, usage of an open source real time operating system), this educational setup allows a good illustration of numerous theoretical and practical aspects related to system engineering and applied informatics.",
"title": ""
},
{
"docid": "da270ae9b62c04d785ea6aad02db2ae9",
"text": "We study the complexity problem in artificial feedforward neural networks designed to approximate real valued functions of several real variables; i.e., we estimate the number of neurons in a network required to ensure a given degree of approximation to every function in a given function class. We indicate how to construct networks with the indicated number of neurons evaluating standard activation functions. Our general theorem shows that the smoother the activation function, the better the rate of approximation.",
"title": ""
},
{
"docid": "de17b1fcae6336947e82adab0881b5ba",
"text": "Presence of duplicate documents in the World Wide Web adversely affects crawling, indexing and relevance, which are the core building blocks of web search. In this paper, we present a set of techniques to mine rules from URLs and utilize these learnt rules for de-duplication using just URL strings without fetching the content explicitly. Our technique is composed of mining the crawl logs and utilizing clusters of similar pages to extract specific rules from URLs belonging to each cluster. Preserving each mined rules for de-duplication is not efficient due to the large number of specific rules. We present a machine learning technique to generalize the set of rules, which reduces the resource footprint to be usable at web-scale. The rule extraction techniques are robust against web-site specific URL conventions. We demonstrate the effectiveness of our techniques through experimental evaluation.",
"title": ""
},
{
"docid": "0ebecd74a2b4e5df55cbb51016c060c9",
"text": "Because of the increasing detail and size of virtual worlds, designers are more and more urged to consider employing procedural methods to alleviate part of their modeling work. However, such methods are often unintuitive to use, difficult to integrate, and provide little user control, making their application far from straightforward.\n In our declarative modeling approach, designers are provided with a more productive and simplified virtual world modeling workflow that matches better with their iterative way of working. Using interactive procedural sketching, they can quickly layout a virtual world, while having proper user control at the level of large terrain features. However, in practice, designers require a finer level of control. Integrating procedural techniques with manual editing in an iterative modeling workflow is an important topic that has remained relatively unaddressed until now.\n This paper identifies challenges of this integration and discusses approaches to combine these methods in such a way that designers can freely mix them, while the virtual world model is kept consistent during all modifications. We conclude that overcoming the challenges mentioned, for example in a declarative modeling context, is instrumental to achieve the much desired adoption of procedural modeling in mainstream virtual world modeling.",
"title": ""
},
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "c86f477a1a2900a1b3d5dc80974c6f7c",
"text": "The understanding of the metal and transition metal dichalcogenide (TMD) interface is critical for future electronic device technologies based on this new class of two-dimensional semiconductors. Here, we investigate the initial growth of nanometer-thick Pd, Au, and Ag films on monolayer MoS2. Distinct growth morphologies are identified by atomic force microscopy: Pd forms a uniform contact, Au clusters into nanostructures, and Ag forms randomly distributed islands on MoS2. The formation of these different interfaces is elucidated by large-scale spin-polarized density functional theory calculations. Using Raman spectroscopy, we find that the interface homogeneity shows characteristic Raman shifts in E2g(1) and A1g modes. Interestingly, we show that insertion of graphene between metal and MoS2 can effectively decouple MoS2 from the perturbations imparted by metal contacts (e.g., strain), while maintaining an effective electronic coupling between metal contact and MoS2, suggesting that graphene can act as a conductive buffer layer in TMD electronics.",
"title": ""
},
{
"docid": "187c696aeb78607327fd817dfa9446ba",
"text": "OBJECTIVE\nThe integration of SNOMED CT into the Unified Medical Language System (UMLS) involved the alignment of two views of synonymy that were different because the two vocabulary systems have different intended purposes and editing principles. The UMLS is organized according to one view of synonymy, but its structure also represents all the individual views of synonymy present in its source vocabularies. Despite progress in knowledge-based automation of development and maintenance of vocabularies, manual curation is still the main method of determining synonymy. The aim of this study was to investigate the quality of human judgment of synonymy.\n\n\nDESIGN\nSixty pairs of potentially controversial SNOMED CT synonyms were reviewed by 11 domain vocabulary experts (six UMLS editors and five noneditors), and scores were assigned according to the degree of synonymy.\n\n\nMEASUREMENTS\nThe synonymy scores of each subject were compared to the gold standard (the overall mean synonymy score of all subjects) to assess accuracy. Agreement between UMLS editors and noneditors was measured by comparing the mean synonymy scores of editors to noneditors.\n\n\nRESULTS\nAverage accuracy was 71% for UMLS editors and 75% for noneditors (difference not statistically significant). Mean scores of editors and noneditors showed significant positive correlation (Spearman's rank correlation coefficient 0.654, two-tailed p < 0.01) with a concurrence rate of 75% and an interrater agreement kappa of 0.43.\n\n\nCONCLUSION\nThe accuracy in the judgment of synonymy was comparable for UMLS editors and nonediting domain experts. There was reasonable agreement between the two groups.",
"title": ""
},
{
"docid": "a7cc577ae2a09a5ff18333b7bfb47001",
"text": "Metacercariae of an unidentified species of Apophallus Lühe, 1909 are associated with overwinter mortality in coho salmon, Oncorhynchus kisutch (Walbaum, 1792), in the West Fork Smith River, Oregon. We infected chicks with these metacercariae in order to identify the species. The average size of adult worms was 197 × 57 μm, which was 2 to 11 times smaller than other described Apophallus species. Eggs were also smaller, but larger in proportion to body size, than in other species of Apophallus. Based on these morphological differences, we describe Apophallus microsoma n. sp. In addition, sequences from the cytochrome c oxidase 1 gene from Apophallus sp. cercariae collected in the study area, which are likely conspecific with experimentally cultivated A. microsoma, differ by >12% from those we obtained from Apophallus donicus ( Skrjabin and Lindtrop, 1919 ) and from Apophallus brevis Ransom, 1920 . The taxonomy and pathology of Apophallus species is reviewed.",
"title": ""
},
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "d13145bc68472ed9a06bafd86357c5dd",
"text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple yarn-level ambient occlusion approximation and self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.",
"title": ""
},
{
"docid": "2e0585860c1fa533412ff1fea76632cb",
"text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.",
"title": ""
},
{
"docid": "3669d58dc1bed1d83e5d0d6747771f0e",
"text": "To cite: He A, Kwatra SG, Kim N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016214761 DESCRIPTION A 26-year-old woman with a reported history of tinea versicolour presented for persistent hypopigmentation on her bilateral forearms. Detailed examination revealed multiple small (5–10 mm), irregularly shaped white macules on the extensor surfaces of the bilateral forearms overlying slightly erythaematous skin. The surrounding erythaematous skin blanched with pressure and with elevation of the upper extremities the white macules were no longer visible (figures 1 and 2). A clinical diagnosis of Bier spots was made based on the patient’s characteristic clinical features. Bier spots are completely asymptomatic and are often found on the extensor surfaces of the upper and lower extremities, although they are sometimes generalised. They are a benign physiological vascular anomaly, arising either from cutaneous vessels responding to venous hypertension or from small vessel vasoconstriction leading to tissue hypoxia. 3 Our patient had neither personal nor family history of vascular disease. Bier spots are easily diagnosed by a classic sign on physical examination: the pale macules disappear with pressure applied on the surrounding skin or by elevating the affected limbs (figure 2). However, Bier spots can be easily confused with a variety of other disorders associated with hypopigmented macules. The differential diagnosis includes vitiligo, postinflammatory hypopigmentation and tinea versicolour, which was a prior diagnosis in this case. Bier spots are often idiopathic and regress spontaneously, although there are reports of Bier spots heralding systemic diseases, such as scleroderma renal crisis, mixed cryoglobulinaemia or lymphoma. Since most Bier spots are idiopathic and transient, no treatment is required.",
"title": ""
},
{
"docid": "4d832a8716aebf7c36ae6894ce1bac33",
"text": "Autonomous vehicles require a reliable perception of their environment to operate in real-world conditions. Awareness of moving objects is one of the key components for the perception of the environment. This paper proposes a method for detection and tracking of moving objects (DATMO) in dynamic environments surrounding a moving road vehicle equipped with a Velodyne laser scanner and GPS/IMU localization system. First, at every time step, a local 2.5D grid is built using the last sets of sensor measurements. Along time, the generated grids combined with localization data are integrated into an environment model called local 2.5D map. In every frame, a 2.5D grid is compared with an updated 2.5D map to compute a 2.5D motion grid. A mechanism based on spatial properties is presented to suppress false detections that are due to small localization errors. Next, the 2.5D motion grid is post-processed to provide an object level representation of the scene. The detected moving objects are tracked over time by applying data association and Kalman filtering. The experiments conducted on different sequences from KITTI dataset showed promising results, demonstrating the applicability of the proposed method.",
"title": ""
},
{
"docid": "ac34478a54d67abce7c892e058295e63",
"text": "The popularity of the term \"integrated curriculum\" has grown immensely in medical education over the last two decades, but what does this term mean and how do we go about its design, implementation, and evaluation? Definitions and application of the term vary greatly in the literature, spanning from the integration of content within a single lecture to the integration of a medical school's comprehensive curriculum. Taking into account the integrated curriculum's historic and evolving base of knowledge and theory, its support from many national medical education organizations, and the ever-increasing body of published examples, we deem it necessary to present a guide to review and promote further development of the integrated curriculum movement in medical education with an international perspective. We introduce the history and theory behind integration and provide theoretical models alongside published examples of common variations of an integrated curriculum. In addition, we identify three areas of particular need when developing an ideal integrated curriculum, leading us to propose the use of a new, clarified definition of \"integrated curriculum\", and offer a review of strategies to evaluate the impact of an integrated curriculum on the learner. This Guide is presented to assist educators in the design, implementation, and evaluation of a thoroughly integrated medical school curriculum.",
"title": ""
}
] |
scidocsrr
|
673ff1460830ec05f4c68c46a6b0b84e
|
Impact Of Employee Participation On Job Satisfaction, Employee Commitment And Employee Productivity
|
[
{
"docid": "3fd9fd52be3153fe84f2ea6319665711",
"text": "The theories of supermodular optimization and games provide a framework for the analysis of systems marked by complementarity. We summarize the principal results of these theories and indicate their usefulness by applying them to study the shift to 'modern manufacturing'. We also use them to analyze the characteristic features of the Lincoln Electric Company's strategy and structure.",
"title": ""
}
] |
[
{
"docid": "c252cca4122984aac411a01ce28777f7",
"text": "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller, that ensures exponential convergence of the system considered, is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.",
"title": ""
},
{
"docid": "7ad244791a1ef91495aa3e0f4cf43f0c",
"text": "T he education and research communities are abuzz with new (or at least re-discovered) ideas about the nature of cognition and learning. Terms like situated cognition,\" \"distributed cognition,\" and \"communities of practice\" fill the air. Recent dialogue in Educational Researcher (Anderson, Reder, & Simon, 1996, 1997; Greeno, 1997) typifies this discussion. Some have argued that the shifts in world view that these discussions represent are even more fundamental than the now-historical shift from behaviorist to cognitive views of learning (Shuell, 1986). These new iaeas about the nature of knowledge, thinking, and learning--which are becoming known as the \"situative perspective\" (Greeno, 1997; Greeno, Collins, & Resnick, 1996)--are interacting with, and sometimes fueling, current reform movements in education. Most discussions of these ideas and their implications for educational practice have been cast primarily in terms of students. Scholars and policymakers have considered, for example, how to help students develop deep understandings of subject matter, situate students' learning in meaningful contexts, and create learning communities in which teachers and students engage in rich discourse about important ideas (e.g., National Council of Teachers of Mathematics, 1989; National Education Goals Panel, 1991; National Research Council, 1993). Less attention has been paid to teachers--either to their roles in creating learning experiences consistent with the reform agenda or to how they themselves learn new ways of teaching. In this article we focus on the latter. Our purpose in considering teachers' learning is twofold. First, we use these ideas about the nature of learning and knowing as lenses for understanding recent research on teacher learning. Second, we explore new issues about teacher learning and teacher education that this perspective brings to light. We begin with a brief overview of three conceptual themes that are central to the situative perspect ive-that cognition is (a) situated in particular physical and social contexts; (b) social in nature; and (c) distributed across the individual, other persons, and tools.",
"title": ""
},
{
"docid": "2136c0e78cac259106d5424a2985e5d7",
"text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net",
"title": ""
},
{
"docid": "4b8470edc0d643e9baeceae7d15a3c8b",
"text": "The authors have investigated potential applications of artificial neural networks for electrocardiographic QRS detection and beat classification. For the task of QRS detection, the authors used an adaptive multilayer perceptron structure to model the nonlinear background noise so as to enhance the QRS complex. This provided more reliable detection of QRS complexes even in a noisy environment. For electrocardiographic QRS complex pattern classification, an artificial neural network adaptive multilayer perceptron was used as a pattern classifier to distinguish between normal and abnormal beat patterns, as well as to classify 12 different abnormal beat morphologies. Preliminary results using the MIT/BIH (Massachusetts Institute of Technology/Beth Israel Hospital, Cambridge, MA) arrhythmia database are encouraging.",
"title": ""
},
{
"docid": "d2146f1821812ca65cfd56f557252200",
"text": "This paper presents an automatic annotation tool AATOS for providing documents with semantic annotations. The tool links entities found from the texts to ontologies defined by the user. The application is highly configurable and can be used with different natural language Finnish texts. The application was developed as a part of the WarSampo and Semantic Finlex projects and tested using Kansa Taisteli magazine articles and consolidated Finnish legislation of Semantic Finlex. The quality of the automatic annotation was evaluated by measuring precision and recall against existing manual annotations. The results showed that the quality of the input text, as well as the selection and configuration of the ontologies impacted the results.",
"title": ""
},
{
"docid": "e6c0aa517c857ed217fc96aad58d7158",
"text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.",
"title": ""
},
{
"docid": "42b6aca92046022cf77b724e2704348b",
"text": "We explore a model-based approach to reinforcement learning where partially or totally unknown dynamics are learned and explicit planning is performed. We learn dynamics with neural networks, and plan behaviors with differential dynamic programming (DDP). In order to handle complicated dynamics, such as manipulating liquids (pouring), we consider temporally decomposed dynamics. We start from our recent work [1] where we used locally weighted regression (LWR) to model dynamics. The major contribution of this paper is making use of deep learning in the form of neural networks with stochastic DDP, and showing the advantages of neural networks over LWR. For this purpose, we extend neural networks for: (1) modeling prediction error and output noise, (2) computing an output probability distribution for a given input distribution, and (3) computing gradients of output expectation with respect to an input. Since neural networks have nonlinear activation functions, these extensions were not easy. We provide an analytic solution for these extensions using some simplifying assumptions. We verified this method in pouring simulation experiments. The learning performance with neural networks was better than that of LWR. The amount of spilled materials was reduced. We also present early results of robot experiments using a PR2. Accompanying video: https://youtu.be/aM3hE1J5W98.",
"title": ""
},
{
"docid": "a470aa1ba955cdb395b122daf2a17b6a",
"text": "Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete and noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari we show that our method outperforms previous approaches relying on recurrent neural networks to encode the past.",
"title": ""
},
{
"docid": "cfec098f84e157a2e12f0ff40551c977",
"text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.",
"title": ""
},
{
"docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7",
"text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.",
"title": ""
},
{
"docid": "729cb5a59c1458ce6c9ef3fa29ca1d98",
"text": "The Simulink/Stateflow toolset is an integrated suite enabling model-based design and has become popular in the automotive and aeronautics industries. We have previously developed a translator called Simtolus from Simulink to the synchronous language Lustre and we build upon that work by encompassing Stateflow as well. Stateflow is problematical for synchronous languages because of its unbounded behaviour so we propose analysis techniques to define a subset of Stateflow for which we can define a synchronous semantics. We go further and define a \"safe\" subset of Stateflow which elides features which are potential sources of errors in Stateflow designs. We give an informal presentation of the Stateflow to Lustre translation process and show how our model-checking tool Lesar can be used to verify some of the semantical checks we have proposed. Finally, we present a small case-study.",
"title": ""
},
{
"docid": "18c56e9d096ba4ea48a0579626f83edc",
"text": "PURPOSE\nThe purpose of this study was to provide an overview of platelet-rich plasma (PRP) injected into the scalp for the management of androgenic alopecia.\n\n\nMATERIALS AND METHODS\nA literature review was performed to evaluate the benefits of PRP in androgenic alopecia.\n\n\nRESULTS\nHair restoration has been increasing. PRP's main components of platelet-derived growth factor, transforming growth factor, and vascular endothelial growth factor have the potential to stimulate hard and soft tissue wound healing. In general, PRP showed a benefit on patients with androgenic alopecia, including increased hair density and quality. Currently, different PRP preparations are being used with no standard technique.\n\n\nCONCLUSION\nThis review found beneficial effects of PRP on androgenic alopecia. However, more rigorous study designs, including larger samples, quantitative measurements of effect, and longer follow-up periods, are needed to solidify the utility of PRP for treating patients with androgenic alopecia.",
"title": ""
},
{
"docid": "bff21b4a0bc4e7cc6918bc7f107a5ca5",
"text": "This paper discusses driving system design based on traffic rules. This allows fully automated driving in an environment with human drivers, without necessarily changing equipment on other vehicles or infrastructure. It also facilitates cooperation between the driving system and the host driver during highly automated driving. The concept, referred to as legal safety, is illustrated for highly automated driving on highways with distance keeping, intelligent speed adaptation, and lane-changing functionalities. Requirements by legal safety on perception and control components are discussed. This paper presents the actual design of a legal safety decision component, which predicts object trajectories and calculates optimal subject trajectories. System implementation on automotive electronic control units and results on vehicle and simulator are discussed.",
"title": ""
},
{
"docid": "2252d2fd9955cac5e16304cb90f9dd60",
"text": "A number of solutions have been proposed to address the free-riding problem in peer-to-peer file sharing systems. The solutions are either imperfect-they allow some users to cheat the system with malicious behavior, or expensive-they require human intervention, require servers, or incur high mental transaction costs. The authors proposed a method to address these weaknesses. Specifically, a utility function was introduced to capture contributions made by a user and an auditing scheme to ensure the integrity of a utility function's values. The method enabled to reduce cheating by a malicious peer: it is shown that this approach can efficiently detect malicious peers with a probability over 98%.",
"title": ""
},
{
"docid": "7e8c99297dd2f9f73f8d50d92115090b",
"text": "This paper proposes a new wrist mechanism for robot manipulation. To develop multi-dof wrist mechanisms that can emulate human wrists, compactness and high torque density are the major challenges. Traditional wrist mechanisms consist of series of rotary motors that require gearing to amplify the output torque. This often results in a bulky wrist mechanism. Instead, large linear force can be easily realized in a compact space by using lead screw motors. Inspired by the muscle-tendon actuation pattern, the proposed mechanism consists of two parallel placed linear motors. Their linear motions are transmitted to two perpendicular rotations through a spherical mechanism and two slider crank mechanisms. High torque density can be achieved. Static and dynamic models are developed to design the wrist mechanism. A wrist prototype and its position control experiments will be presented with results discussed. The novel mechanism is expected to serve as an alternative for robot manipulators in applications that require human-friendly interactions.",
"title": ""
},
{
"docid": "eede682da157ac788a300e9c3080c460",
"text": "We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.",
"title": ""
},
{
"docid": "116fd1ecd65f7ddfdfad6dca09c12876",
"text": "Malicious hardware Trojan circuitry inserted in safety-critical applications is a major threat to national security. In this work, we propose a novel application of a key-based obfuscation technique to achieve security against hardware Trojans. The obfuscation scheme is based on modifying the state transition function of a given circuit by expanding its reachable state space and enabling it to operate in two distinct modes -- the normal mode and the obfuscated mode. Such a modification obfuscates the rareness of the internal circuit nodes, thus making it difficult for an adversary to insert hard-to-detect Trojans. It also makes some inserted Trojans benign by making them activate only in the obfuscated mode. The combined effect leads to higher Trojan detectability and higher level of protection against such attack. Simulation results for a set of benchmark circuits show that the scheme is capable of achieving high levels of security at modest design overhead.",
"title": ""
},
{
"docid": "73b81ca84f4072188e1a263e9a7ea330",
"text": "The digital workplace is widely acknowledged as an important organizational asset for optimizing knowledge worker productivity. While there is no particular research stream on the digital workplace, scholars have conducted intensive research on related topics. This study aims to summarize the practical implications of the current academic body of knowledge on the digital workplace. For this purpose, a screening of academic-practitioner literature was conducted, followed by a systematic review of academic top journal literature. The screening revealed four main research topics on the digital workplace that are present in academic-practitioner literature: 1) Collaboration, 2) Compliance, 3) Mobility, and 4) Stress and overload. Based on the four topics, this study categorizes practical implications on the digital workplace into 15 concepts. Thereby, it provides two main contributions. First, the study delivers condensed information for practitioners about digital workplace design. Second, the results shed light on the relevance of IS research.",
"title": ""
},
{
"docid": "127405febe57f4df6f8f16d42e0ac762",
"text": "In the recent years there has been an increase in scientific papers publications in Albania and its neighboring countries that have large communities of Albanian speaking researchers. Many of these papers are written in Albanian. It is a very time consuming task to find papers related to the researchers’ work, because there is no concrete system that facilitates this process. In this paper we present the design of a modular intelligent search system for articles written in Albanian. The main part of it is the recommender module that facilitates searching by providing relevant articles to the users (in comparison with a given one). We used a cosine similarity based heuristics that differentiates the importance of term frequencies based on their location in the article. We did not notice big differences on the recommendation results when using different combinations of the importance factors of the keywords, title, abstract and body. We got similar results when using only theand body. We got similar results when using only the title and abstract in comparison with the other combinations. Because we got fairly good results in this initial approach, we believe that similar recommender systems for documents written in Albanian can be built also in contexts not related to scientific publishing. Keywords—recommender system; Albanian; information retrieval; intelligent search; digital library",
"title": ""
},
{
"docid": "ac46286c7d635ccdcd41358666026c12",
"text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.",
"title": ""
}
] |
scidocsrr
|
da4dceaf228a008ed5462e90668a6333
|
Advanced battery management system using MATLAB/Simulink
|
[
{
"docid": "4b9953e7ff548a0d1b09bca3c3f3c38f",
"text": "Battery management system (BMS) is an integral part of any electrical vehicle, which ensures that the batteries are not subjected to conditions outside their specified safe operating conditions. Thus the safety of the battery as well as of the passengers depend on the design of the BMS. In the present work a preliminary work is carried out to simulate a typical BMS for hybrid electrical vehicle. The various functional blocks of the BMS are implemented in SIMULINK toolbox of MATLAB. The BMS proposed is equipped with a battery model in which SOC is used as one of the states to overcome the limitation of stand-alone coulomb counting method for SOC estimation. The parameters of the battery are extracted from experimental results and incorporated in the model. The simulation results are validated by experimental results.",
"title": ""
}
] |
[
{
"docid": "6d2667dd550e14d4d46b24d9c8580106",
"text": "Deficits in gratification delay are associated with a broad range of public health problems, such as obesity, risky sexual behavior, and substance abuse. However, 6 decades of research on the construct has progressed less quickly than might be hoped, largely because of measurement issues. Although past research has implicated 5 domains of delay behavior, involving food, physical pleasures, social interactions, money, and achievement, no published measure to date has tapped all 5 components of the content domain. Existing measures have been criticized for limitations related to efficiency, reliability, and construct validity. Using an innovative Internet-mediated approach to survey construction, we developed the 35-item 5-factor Delaying Gratification Inventory (DGI). Evidence from 4 studies and a large, diverse sample of respondents (N = 10,741) provided support for the psychometric properties of the measure. Specifically, scores on the DGI demonstrated strong internal consistency and test-retest reliability for the 35-item composite, each of the 5 domains, and a 10-item short form. The 5-factor structure fit the data well and had good measurement invariance across subgroups. Construct validity was supported by correlations with scores on closely related self-control measures, behavioral ratings, Big Five personality trait measures, and measures of adjustment and psychopathology, including those on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. DGI scores also showed incremental validity in accounting for well-being and health-related variables. The present investigation holds implications for improving public health, accelerating future research on gratification delay, and facilitating survey construction research more generally by demonstrating the suitability of an Internet-mediated strategy.",
"title": ""
},
{
"docid": "37a5089b7e9e427d330d4720cdcf00d9",
"text": "3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent ‘geometry images’ representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images. Our code is available at https://github.com/sinhayan/surfnet.",
"title": ""
},
{
"docid": "b9d12a2c121823a81902375f6be893bb",
"text": "Internet users are often victimized by malicious attackers. Some attackers infect and use innocent users’ machines to launch large-scale attacks without the users’ knowledge. One of such attacks is the click-fraud attack. Click-fraud happens in Pay-Per-Click (PPC) ad networks where the ad network charges advertisers for every click on their ads. Click-fraud has been proved to be a serious problem for the online advertisement industry. In a click-fraud attack, a user or an automated software clicks on an ad with a malicious intent and advertisers need to pay for those valueless clicks. Among many forms of click-fraud, botnets with the automated clickers are the most severe ones. In this paper, we present a method for detecting automated clickers from the user-side. The proposed method to Fight Click-Fraud, FCFraud, can be integrated into the desktop and smart device operating systems. Since most modern operating systems already provide some kind of anti-malware service, our proposed method can be implemented as a part of the service. We believe that an effective protection at the operating system level can save billions of dollars of the advertisers. Experiments show that FCFraud is 99.6% (98.2% in mobile ad library generated traffic) accurate in classifying ad requests from all user processes and it is 100% successful in detecting clickbots in both desktop and mobile devices. We implement a cloud backend for the FCFraud service to save battery power in mobile devices. The overhead of executing FCFraud is also analyzed and we show that it is reasonable for both the platforms. Copyright c © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "316aadaf77171c03167ad8147b2d8ebb",
"text": "Rheumatoid arthritis destroys joints of the body like erosion in bones which intern may cause deformity and characterized by inflammation of the tissue around the joints as well as in other organs of the body. At the beginning of this disease mainly the joints of hand and wrist are affected making hand radiograph analysis very important. Lately manual JSW measurement in hand X-ray digital radiograph of Arthritis patients were in use but it has disadvantages like inaccuracy, inter-reader variability. The reproducible quantification of the progression of joint space narrowing and the erosive bone destructions caused by RA is crucial during treatment and in imaging biomarkers in clinical trials. Current manual scoring methods exhibit interreader variability, even after intensive training, and thus, impede the efficient monitoring of the disease. Hand radiograph analysis is difficult for radiologist as there are 14 number of hand joints. To avoid observer dependency, computer-aided analysis is required. Wrist joint space narrowing is a main radiographic outcome of rheumatoid arthritis (RA). Yet, automatic radiographic wrist joint space width (JSW) quantification with statistical properties for RA patients has not been widely investigated. The automated analysis of statistical properties helps to reduce need of skilled personnel.",
"title": ""
},
{
"docid": "9b504f633488016fad865dee6fbdf3ef",
"text": "Transmission lines is the important factor of the power system. Transmission and distribution lines has good contribution in the generating unit and consumers to obtain the continuity of electric supply. To economically transfer high power between systems and from control generating field. Transmission line run over hundreds of kilometers to supply electrical power to the consumers. It is a required for industries to detect the faults in the power system as early as possible. “Fault Detection and Auto Line Distribution System With GSM Module” is a automation technique used for fault detection in AC supply and auto sharing of power. The significance undetectable faults is that they represent a serious public safety hazard as well as a risk of arcing ignition of fires. This paper represents under voltage and over current fault detection. It is useful in technology to provide many applications like home, industry etc..",
"title": ""
},
{
"docid": "d96f7c54d669771f9fa881e97bddd5f6",
"text": "Conceptual modeling is one major topic in information systems research and becomes even more important with the arising of new software engineering principles like model driven architecture (MDA) or serviceoriented architectures (SOA). Research on conceptual modeling is characterized by a dilemma: Empirical research confirms that in practice conceptual modeling is often perceived as difficult and not done well. The application of reusable conceptual models is a promising approach to support model designers. At the same time, the IS research community claims for a sounder theoretical base for conceptual modeling. The design science research paradigm delivers a framework to fortify the theoretical foundation of research on conceptual models. We provide insights on how to achieve both, relevance and rigor, in conceptual modeling by identifying requirements for reusable conceptual models on the basis of the design science research paradigm.",
"title": ""
},
{
"docid": "c123d61a6a94e963d4fbf6075c496599",
"text": "Most metastatic tumors, such as those originating in the prostate, lung, and gastrointestinal tract, respond poorly to conventional chemotherapy. Novel treatment strategies for advanced cancer are therefore desperately needed. Dietary restriction of the essential amino acid methionine offers promise as such a strategy, either alone or in combination with chemotherapy or other treatments. Numerous in vitro and animal studies demonstrate the effectiveness of dietary methionine restriction in inhibiting growth and eventually causing death of cancer cells. In contrast, normal host tissues are relatively resistant to methionine restriction. These preclinical observations led to a phase I clinical trial of dietary methionine restriction for adults with advanced cancer. Preliminary findings from this trial indicate that dietary methionine restriction is safe and feasible for the treatment of patients with advanced cancer. In addition, the trial has yielded some preliminary evidence of antitumor activity. One patient with hormone-independent prostate cancer experienced a 25% reduction in serum prostate-specific antigen (PSA) after 12 weeks on the diet, and a second patient with renal cell cancer experienced an objective radiographic response. The possibility that methionine restriction may act synergistically with other cancer treatments such as chemotherapy is being explored. Findings to date support further investigation of dietary methionine restriction as a novel treatment strategy for advanced cancer.",
"title": ""
},
{
"docid": "c15aa2444187dffe2be4636ad00babdd",
"text": "Most people have become “big data” producers in their daily life. Our desires, opinions, sentiments, social links as well as our mobile phone calls and GPS track leave traces of our behaviours. To transform these data into knowledge, value is a complex task of data science. This paper shows how the SoBigData Research Infrastructure supports data science towards the new frontiers of big data exploitation. Our research infrastructure serves a large community of social sensing and social mining researchers and it reduces the gap between existing research centres present at European level. SoBigData integrates resources and creates an infrastructure where sharing data and methods among text miners, visual analytics researchers, socio-economic scientists, network scientists, political scientists, humanities researchers can indeed occur. The main concepts related to SoBigData Research Infrastructure are presented. These concepts support virtual and transnational (on-site) access to the resources. Creating and supporting research communities are considered to be of vital importance for the success of our research infrastructure, as well as contributing to train the new generation of data scientists. Furthermore, this paper introduces the concept of exploratory and shows their role in the promotion of the use of our research infrastructure. The exploratories presented in this paper represent also a set of real applications in the context of social mining. Finally, a special attention is given to the legal and ethical aspects. Everything in SoBigData is supervised by an ethical and legal framework.",
"title": ""
},
{
"docid": "e077bb23271fbc056290be84b39a9fcc",
"text": "Rovers will continue to play an important role in planetary exploration. Plans include the use of the rocker-bogie rover configuration. Here, models of the mechanics of this configuration are presented. Methods for solving the inverse kinematics of the system and quasi-static force analysis are described. Also described is a simulation based on the models of the rover’s performance. Experimental results confirm the validity of the models.",
"title": ""
},
{
"docid": "2a880c868e21d16f1cb9d377e9961ff2",
"text": "This paper presents the results of an action research project into the practice of formulating a knowledge management strategy for a middle-sized supply chain solution provider in Australia. The paper contrasts our practice of formulating the strategy with expectations and understandings of strategy and knowledge management strategy critical success factors identified from the literature. We have adopted an action research approach incorporating multiple iterative phases. With the cooperation of the company, this approach has enabled a change in organisational culture to one which now fosters a knowledge sharing environment.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "fc9699b4382b1ddc6f60fc6ec883a6d3",
"text": "Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. Bisection bandwidth is low, severly constraining distributed computation Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components).\n Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more,monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.",
"title": ""
},
{
"docid": "533566714729c146967238ff59222a16",
"text": "Several recent papers investigate Active Learning (AL) for mitigating the datadependence of deep learning for natural language processing. However, the applicability of AL to real-world problems remains an open question. While in supervised learning, practitioners can try many different methods, evaluating each against a validation set before selecting a model, AL affords no such luxury. Over the course of one AL run, an agent annotates its dataset exhausting its labeling budget. Thus, given a new task, an active learner has no opportunity to compare models and acquisition functions. This paper provides a largescale empirical study of deep active learning, addressing multiple tasks and, for each, multiple datasets, multiple models, and a full suite of acquisition functions. We find that across all settings, Bayesian active learning by disagreement, using uncertainty estimates provided either by Dropout or Bayes-by-Backprop significantly improves over i.i.d. baselines and usually outperforms classic uncertainty sampling.",
"title": ""
},
{
"docid": "ca82ceafc6079f416d9f7b94a7a6a665",
"text": "When Big data and cloud computing join forces together, several domains like: healthcare, disaster prediction and decision making become easier and much more beneficial to users in term of information gathering, although cloud computing will reduce time and cost of analyzing information for big data, it may harm the confidentiality and integrity of the sensitive data, for instance, in healthcare, when analyzing disease's spreading area, the name of the infected people must remain secure, hence the obligation to adopt a secure model that protect sensitive data from malicious users. Several case studies on the integration of big data in cloud computing, urge on how easier it would be to analyze and manage big data in this complex envronement. Companies must consider outsourcing their sensitive data to the cloud to take advantage of its beneficial resources such as huge storage, fast calculation, and availability, yet cloud computing might harm the security of data stored and computed in it (confidentiality, integrity). Therefore, strict paradigm must be adopted by organization to obviate their outsourced data from being stolen, damaged or lost. In this paper, we compare between the existing models to secure big data implementation in the cloud computing. Then, we propose our own model to secure Big Data on the cloud computing environement, considering the lifecycle of data from uploading, storage, calculation to its destruction.",
"title": ""
},
{
"docid": "1c2dae29ed066eec72e72c1173bd263d",
"text": "Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept \"Internet of Things\" has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed.",
"title": ""
},
{
"docid": "854b473b0ee6d3cf4d1a34cd79a658e3",
"text": "Blockchain provides a new approach for participants to maintain reliable databases in untrusted networks without centralized authorities. However, there are still many serious problems in real blockchain systems in IP network such as the lack of support for multicast and the hierarchies of status. In this paper, we design a bitcoin-like blockchain system named BlockNDN over Named Data Networking and we implement and deploy it on our cluster as well. The resulting design solves those problems in IP network. It provides completely decentralized systems and simplifies system architecture. It also improves the weak-connectivity phenomenon and decreases the broadcast overhead.",
"title": ""
},
{
"docid": "76aacf8fd5c24f64211015ce9c196bf0",
"text": "In industrially relevant Cu/ZnO/Al2 O3 catalysts for methanol synthesis, the strong metal support interaction between Cu and ZnO is known to play a key role. Here we report a detailed chemical transmission electron microscopy study on the nanostructural consequences of the strong metal support interaction in an activated high-performance catalyst. For the first time, clear evidence for the formation of metastable \"graphite-like\" ZnO layers during reductive activation is provided. The description of this metastable layer might contribute to the understanding of synergistic effects between the components of the Cu/ZnO/Al2 O3 catalysts.",
"title": ""
},
{
"docid": "2f7edc539bc61f8fc07bc6f5f8e496e0",
"text": "We investigate the contextual multi-armed bandit problem in an adversarial setting and introduce an online algorithm that asymptotically achieves the performance of the best contextual bandit arm selection strategy under certain conditions. We show that our algorithm is highly efficient and provides significantly improved performance with a guaranteed performance upper bound in a strong mathematical sense. We have no statistical assumptions on the context vectors and the loss of the bandit arms, hence our results are guaranteed to hold even in adversarial environments. We use a tree notion in order to partition the space of context vectors in a nested structure. Using this tree, we construct a large class of context dependent bandit arm selection strategies and adaptively combine them to achieve the performance of the best strategy. We use the hierarchical nature of introduced tree to implement this combination with a significantly low computational complexity, thus our algorithm can be efficiently used in applications involving big data. Through extensive set of experiments involving synthetic and real data, we demonstrate significant performance gains achieved by the proposed algorithm with respect to the state-of-the-art adversarial bandit algorithms.",
"title": ""
},
{
"docid": "ad4596e24f157653a36201767d4b4f3b",
"text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets in different sizes, genres and annotation schemes. We obtain stateof-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.",
"title": ""
},
{
"docid": "4830c447cb27d5ad1696bb25ce8c89fd",
"text": "For a grid-connected converter with an LCL filter, the harmonic compensators of a proportional-resonant (PR) controller are usually limited to several low-order current harmonics due to system instability when the compensated frequency is out of the bandwidth of the system control loop. In this paper, a new current feedback method for PR current control is proposed. The weighted average value of the currents flowing through the two inductors of the LCL filter is used as the feedback to the current PR regulator. Consequently, the control system with the LCL filter is degraded from a third-order function to a first-order one. A large proportional control-loop gain can be chosen to obtain a wide control-loop bandwidth, and the system can be optimized easily for minimum current harmonic distortions, as well as system stability. The inverter system with the proposed controller is investigated and compared with those using traditional control methods. Experimental results on a 5-kW fuel-cell inverter are provided, and the new current control strategy has been verified.",
"title": ""
}
] |
scidocsrr
|
67d11402a53a224307834eb226c43aa2
|
A new mobile-based multi-factor authentication scheme using pre-shared number, GPS location and time stamp
|
[
{
"docid": "6356a0272b95ade100ad7ececade9e36",
"text": "We describe a browser extension, PwdHash, that transparently produces a different password for each site, improving web password security and defending against password phishing and other attacks. Since the browser extension applies a cryptographic hash function to a combination of the plaintext password entered by the user, data associated with the web site, and (optionally) a private salt stored on the client machine, theft of the password received at one site will not yield a password that is useful at another site. While the scheme requires no changes on the server side, implementing this password method securely and transparently in a web browser extension turns out to be quite difficult. We describe the challenges we faced in implementing PwdHash and some techniques that may be useful to anyone facing similar security issues in a browser environment.",
"title": ""
},
{
"docid": "7b4e9043e11d93d8152294f410390f6d",
"text": "In this paper, we present a series of methods to authenticate a user with a graphical password. To that end, we employ the user¿s personal handheld device as the password decoder and the second factor of authentication. In our methods, a service provider challenges the user with an image password. To determine the appropriate click points and their order, the user needs some hint information transmitted only to her handheld device. We show that our method can overcome threats such as key-loggers, weak password, and shoulder surfing. With the increasing popularity of handheld devices such as cell phones, our approach can be leveraged by many organizations without forcing the user to memorize different passwords or carrying around different tokens.",
"title": ""
},
{
"docid": "679759d8f8e4c4ef5a2bb1356a61d7f5",
"text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.",
"title": ""
},
{
"docid": "0f503bded2c4b0676de16345d4596280",
"text": "An emerging approach to the problem of reducing the identity theft is represented by the adoption of biometric authentication systems. Such systems however present however several challenges, related to privacy, reliability, security of the biometric data. Inter-operability is also required among the devices used for the authentication. Moreover, very often biometric authentication in itself is not sufficient as a conclusive proof of identity and has to be complemented with multiple other proofs of identity like passwords, SSN, or other user identifiers. Multi-factor authentication mechanisms are thus required to enforce strong authentication based on the biometric and identifiers of other nature.In this paper we provide a two-phase authentication mechanism for federated identity management systems. The first phase consists of a two-factor biometric authentication based on zero knowledge proofs. We employ techniques from vector-space model to generate cryptographic biometric keys. These keys are kept secret, thus preserving the confidentiality of the biometric data, and at the same time exploit the advantages of a biometric authentication. The second authentication combines several authentication factors in conjunction with the biometric to provide a strong authentication. A key advantage of our approach is that any unanticipated combination of factors can be used. Such authentication system leverages the information of the user that are available from the federated identity management system.",
"title": ""
}
] |
[
{
"docid": "13c7278393988ec2cfa9a396255e6ff3",
"text": "Finding good transfer functions for rendering medical volumes is difficult, non-intuitive, and time-consuming. We introduce a clustering-based framework for the automatic generation of transfer functions for volumetric data. The system first applies mean shift clustering to oversegment the volume boundaries according to their low-high (LH) values and their spatial coordinates, and then uses hierarchical clustering to group similar voxels. A transfer function is then automatically generated for each cluster such that the number of occlusions is reduced. The framework also allows for semi-automatic operation, where the user can vary the hierarchical clustering results or the transfer functions generated. The system improves the efficiency and effectiveness of visualizing medical images and is suitable for medical imaging applications.",
"title": ""
},
{
"docid": "3a80168bda1d5d92a5d767117581806a",
"text": "During the last years a wide range of algorithms and devices have been made available to easily acquire range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging and even gaming. For this reason, these problems have been addressed extensively in the literature. Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: We adopt an evolutionary selection algorithm that seeks global agreement among surface points, while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario where model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent techniques at the state of the art.",
"title": ""
},
{
"docid": "104cf54cfa4bc540b17176593cdb77d8",
"text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.",
"title": ""
},
{
"docid": "7c1af982b6ac6aa6df4549bd16c1964c",
"text": "This paper deals with the problem of estimating the position of emitters using only direction of arrival information. We propose an improvement of newly developed algorithm for position finding of a stationary emitter called sensitivity analysis. The proposed method uses Taylor series expansion iteratively to enhance the estimation of the emitter location and reduce position finding error. Simulation results show that our proposed method makes a great improvement on accuracy of position finding with respect to sensitivity analysis method.",
"title": ""
},
{
"docid": "afadbcb8c025ad6feca693c05ce7b43f",
"text": "A data structure that implements a mergeable double-ended priority queue, namely therelaxed min-max heap, is presented. A relaxed min-max heap ofn items can be constructed inO(n) time. In the worst case, operationsfind_min() andfind_max() can be performed in constant time, while each of the operationsmerge(),insert(),delete_min(),delete_max(),decrease_key(), anddelete_key() can be performed inO(logn) time. Moreover,insert() hasO(1) amortized running time. If lazy merging is used,merge() will also haveO(1) worst-case and amortized time. The relaxed min-max heap is the first data structure that achieves these bounds using only two pointers (puls one bit) per item.",
"title": ""
},
{
"docid": "f591ae6217c769d3bca2c15a021125cc",
"text": "Recent years have witnessed an explosive growth of mobile devices. Mobile devices are permeating every aspect of our daily lives. With the increasing usage of mobile devices and intelligent applications, there is a soaring demand for mobile applications with machine learning services. Inspired by the tremendous success achieved by deep learning in many machine learning tasks, it becomes a natural trend to push deep learning towards mobile applications. However, there exist many challenges to realize deep learning in mobile applications, including the contradiction between the miniature nature of mobile devices and the resource requirement of deep neural networks, the privacy and security concerns about individuals' data, and so on. To resolve these challenges, during the past few years, great leaps have been made in this area. In this paper, we provide an overview of the current challenges and representative achievements about pushing deep learning on mobile devices from three aspects: training with mobile data, efficient inference on mobile devices, and applications of mobile deep learning. The former two aspects cover the primary tasks of deep learning. Then, we go through our two recent applications that apply the data collected by mobile devices to inferring mood disturbance and user identification. Finally, we conclude this paper with the discussion of the future of this area.",
"title": ""
},
{
"docid": "a70fa8bc2a48b3cf38bd99b6d1251140",
"text": "In many of today's online applications that facilitate data exploration, results from information filters such as recommender systems are displayed alongside traditional search tools. However, the effect of prediction algorithms on users who are performing open-ended data exploration tasks through a search interface is not well understood. This paper describes a study of three interface variations of a tool for analyzing commuter traffic anomalies in the San Francisco Bay Area. The system supports novel interaction between a prediction algorithm and a human analyst, and is designed to explore the boundaries, limitations and synergies of both. The degree of explanation of underlying data and algorithmic process was varied experimentally across each interface. The experiment (N=197) was performed to assess the impact of algorithm transparency/explanation on data analysis tasks in terms of search success, general insight into the underlying data set and user experience. Results show that 1) presence of recommendations in the user interface produced a significant improvement in recall of anomalies, 2) participants were able to detect anomalies in the data that were missed by the algorithm, 3) participants who used the prediction algorithm performed significantly better when estimating quantities in the data, and 4) participants in the most explanatory condition were the least biased by the algorithm's predictions when estimating quantities.",
"title": ""
},
{
"docid": "85c74646e74aaff7121042beaded5bfe",
"text": "We consider the sampling bias introduced in the study of online networks when collecting data through publicly available APIs (application programming interfaces). We assess differences between three samples of Twitter activity; the empirical context is given by political protests taking place in May 2012. We track online communication around these protests for the period of one month, and reconstruct the network of mentions and re-tweets according to the search and the streaming APIs, and to different filraph comparison tering parameters. We find that smaller samples do not offer an accurate picture of peripheral activity; we also find that the bias is greater for the network of mentions, partly because of the higher influence of snowballing in identifying relevant nodes. We discuss the implications of this bias for the study of diffusion dynamics and political communication through social media, and advocate the need for more uniform sampling procedures to study online communication. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "33a1450fa00705d5ef20780b4e1de6b3",
"text": "This paper reviews the range of sensors used in electronic nose (e-nose) systems to date. It outlines the operating principles and fabrication methods of each sensor type as well as the applications in which the different sensors have been utilised. It also outlines the advantages and disadvantages of each sensor for application in a cost-effective low-power handheld e-nose system.",
"title": ""
},
{
"docid": "9f2d6c872761d8922cac8a3f30b4b7ba",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.",
"title": ""
},
{
"docid": "dcd705e131eb2b60c54ff5cb6ae51555",
"text": "Comprehension is one fundamental process in the software life cycle. Although necessary, this comprehension is difficult to obtain due to amount and complexity of information related to software. Thus, software visualization techniques and tools have been proposed to facilitate the comprehension process and to reduce maintenance costs. This paper shows the results from a Literature Systematic Review to identify software visualization techniques and tools. We analyzed 52 papers and we identified 28 techniques and 33 tools for software visualization. Among these techniques, 71% have been implemented and available to users, 48% use 3D visualization and 80% are generated using static analysis.",
"title": ""
},
{
"docid": "74686e9acab0a4d41c87cadd7da01889",
"text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.",
"title": ""
},
{
"docid": "548b9580c2b36bd1730392a92f6640c2",
"text": "Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of magnetic resonance (MR) images. Unfortunately, MR images always contain a significant amount of noise caused by operator performance, equipment, and the environment, which can lead to serious inaccuracies with segmentation. A robust segmentation technique based on an extension to the traditional fuzzy c-means (FCM) clustering algorithm is proposed in this paper. A neighborhood attraction, which is dependent on the relative location and features of neighboring pixels, is shown to improve the segmentation performance dramatically. The degree of attraction is optimized by a neural-network model. Simulated and real brain MR images with different noise levels are segmented to demonstrate the superiority of the proposed technique compared to other FCM-based methods. This segmentation method is a key component of an MR image-based classification system for brain tumors, currently being developed.",
"title": ""
},
{
"docid": "c59652c2166aefb00469517cd270dea2",
"text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.",
"title": ""
},
{
"docid": "681d0a6dcad967340cfb3ebe9cf7b779",
"text": "We demonstrate an integrated buck dc-dc converter for multi-V/sub CC/ microprocessors. At nominal conditions, the converter produces a 0.9-V output from a 1.2-V input. The circuit was implemented in a 90-nm CMOS technology. By operating at high switching frequency of 100 to 317 MHz with four-phase topology and fast hysteretic control, we reduced inductor and capacitor sizes by three orders of magnitude compared to previously published dc-dc converters. This eliminated the need for the inductor magnetic core and enabled integration of the output decoupling capacitor on-chip. The converter achieves 80%-87% efficiency and 10% peak-to-peak output noise for a 0.3-A output current and 2.5-nF decoupling capacitance. A forward body bias of 500 mV applied to PMOS transistors in the bridge improves efficiency by 0.5%-1%.",
"title": ""
},
{
"docid": "aa6c54a142442ee1de03c57f9afe8972",
"text": "Objectives: We present our 3 years experience with alar batten grafts, using a modified technique, for non-iatrogenic nasal valve/alar",
"title": ""
},
{
"docid": "714df72467bc3e919b7ea7424883cf26",
"text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.",
"title": ""
},
{
"docid": "c30ea570f744f576014aeacf545b027c",
"text": "We aimed to examine the effect of different doses of lutein supplementation on visual function in subjects with long-term computer display light exposure. Thirty-seven healthy subjects with long-term computer display light exposure ranging in age from 22 to 30 years were randomly assigned to one of three groups: Group L6 (6 mg lutein/d, n 12); Group L12 (12 mg lutein/d, n 13); and Group Placebo (maltodextrin placebo, n 12). Levels of serum lutein and visual performance indices such as visual acuity, contrast sensitivity and glare sensitivity were measured at weeks 0 and 12. After 12-week lutein supplementation, serum lutein concentrations of Groups L6 and L12 increased from 0.356 (SD 0.117) to 0.607 (SD 0.176) micromol/l, and from 0.328 (SD 0.120) to 0.733 (SD 0.354) micromol/l, respectively. No statistical changes from baseline were observed in uncorrected visual acuity and best-spectacle corrected visual acuity, whereas there was a trend toward increase in visual acuity in Group L12. Contrast sensitivity in Groups L6 and L12 increased with supplementation, and statistical significance was reached at most visual angles of Group L12. No significant change was observed in glare sensitivity over time. Visual function in healthy subjects who received the lutein supplement improved, especially in contrast sensitivity, suggesting that a higher intake of lutein may have beneficial effects on the visual performance.",
"title": ""
},
{
"docid": "decbbd09bcf7a36a3886d52864e9a08c",
"text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote timely use of skilled maternal and neonatal care during childbirth. According to World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and postnatal (17.4%) period. There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June -September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identified the place of delivery, saved money to pay for expenses, mode of transport identified, identified a birth companion, and arranged a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six) was considered as knowledgeable on key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, Odds ratio, and adjusted Odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women. Multivariate logistic regression revealed that age >26 years (adj OR = 2.97; 95%CI: 1.15-7.7), economic status of above poverty line (adj OR = 4.3; 95%CI: 1.12-16.5), awareness of minimum two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95%CI: 1.4-11.1), preference to private health sector for antenatal care/delivery (adj OR = 2.9; 95%CI: 1.1-8.01), and woman's discussion about the BPCR with her family members (adj OR = 3.4; 95%CI: 1.1-10.4) as the significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than other studies reported from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between woman and her family members may further enhance the BPCR practice.",
"title": ""
}
] |
scidocsrr
|
202ca7528d9310831339a77fbece21a0
|
Model-Based Genetic Algorithms for Algorithm Configuration
|
[
{
"docid": "af31f2a7a996977754f0d39fb51bcacf",
"text": "The evolutionary approach called Scatter Search, and its generalized form called Path Relinking, have proved unusually effective for solving a diverse array of optimization problems from both classical and real world settings. Scatter Search and Path Relinking differ from other evolutionary procedures, such as genetic algorithms, by providing unifying principles for joining solutions based on generalized path constructions (in both Euclidean and neighborhood spaces) and by utilizing strategic designs where other approaches resort to randomization. Scatter Search and Path Relinking are also intimately related to the Tabu Search metaheuristic, and derive additional advantages by making use of adaptive memory and associated memory-exploiting mechanisms that are capable of being adapted to particular contexts. We describe the features of Scatter Search and Path Relinking that set them apart from other evolutionary approaches, and that offer opportunities for creating increasingly more versatile and effective methods in the future. † Partially supported by the visiting professor fellowship program of the University of Valencia (Grant Ref. No. 42743).",
"title": ""
}
] |
[
{
"docid": "bd1c93dfc02d90ad2a0c7343236342a7",
"text": "Osteochondritis dissecans (OCD) of the capitellum is an uncommon disorder seen primarily in the adolescent overhead athlete. Unlike Panner disease, a self-limiting condition of the immature capitellum, OCD is multifactorial and likely results from microtrauma in the setting of cartilage mismatch and vascular susceptibility. The natural history of OCD is poorly understood, and degenerative joint disease may develop over time. Multiple modalities aid in diagnosis, including radiography, MRI, and magnetic resonance arthrography. Lesion size, location, and grade determine management, which should attempt to address subchondral bone loss and articular cartilage damage. Early, stable lesions are managed with rest. Surgery should be considered for unstable lesions. Most investigators advocate arthroscopic débridement with marrow stimulation. Fragment fixation and bone grafting also have provided good short-term results, but concerns persist regarding the healing potential of advanced lesions. Osteochondral autograft transplantation appears to be promising and should be reserved for larger, higher grade lesions. Clinical outcomes and return to sport are variable. Longer-term follow-up studies are necessary to fully assess surgical management, and patients must be counseled appropriately.",
"title": ""
},
{
"docid": "bbb91e336f0125c0e8a0358f6afc9ef1",
"text": "In this paper, we study a new learning paradigm for neural machine translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as AdversarialNMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed 2D convolutional neural network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English→French and German→English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.",
"title": ""
},
{
"docid": "400624533ddd2a0e1ebac1c7022238c1",
"text": "Graphene is a monolayer of tightly packed carbon atoms that possesses many interesting properties and has numerous exciting applications. In this work, we report the antibacterial activity of two water-dispersible graphene derivatives, graphene oxide (GO) and reduced graphene oxide (rGO) nanosheets. Such graphene-based nanomaterials can effectively inhibit the growth of E. coli bacteria while showing minimal cytotoxicity. We have also demonstrated that macroscopic freestanding GO and rGO paper can be conveniently fabricated from their suspension via simple vacuum filtration. Given the superior antibacterial effect of GO and the fact that GO can be mass-produced and easily processed to make freestanding and flexible paper with low cost, we expect this new carbon nanomaterial may find important environmental and clinical applications.",
"title": ""
},
{
"docid": "ca17aabbcbd3756d692117803504f441",
"text": "Novelty detection is the process of identifying the observation(s) that differ in some respect from the training observations (the target class). In reality, the novelty class is often absent during training, poorly sampled or not well defined. Therefore, one-class classifiers can efficiently model such problems. However, due to the unavailability of data from the novelty class, training an end-to-end deep network is a cumbersome task. In this paper, inspired by the success of generative adversarial networks for training deep models in unsupervised and semi-supervised settings, we propose an end-to-end architecture for one-class classification. Our architecture is composed of two deep networks, each of which trained by competing with each other while collaborating to understand the underlying concept in the target class, and then classify the testing samples. One network works as the novelty detector, while the other supports it by enhancing the inlier samples and distorting the outliers. The intuition is that the separability of the enhanced inliers and distorted outliers is much better than deciding on the original samples. The proposed framework applies to different related applications of anomaly and outlier detection in images and videos. The results on MNIST and Caltech-256 image datasets, along with the challenging UCSD Ped2 dataset for video anomaly detection illustrate that our proposed method learns the target class effectively and is superior to the baseline and state-of-the-art methods.",
"title": ""
},
{
"docid": "6ac9b6e2ddf1686087b456d3f0d3014b",
"text": "Ephaptic interactions between a neuron and axons or dendrites passing by its cell body can be, in principle, more significant than ephaptic interactions among axons in a fiber tract. Extracellular action potentials outside axons are small in amplitude and spatially spread out, while they are larger in amplitude and much more spatially confined near cell bodies. We estimated the extracellular potentials associated with an action potential in a cortical pyramidal cell using standard one-dimensional cable theory and volume conductor theory. Their spatial and temporal pattern reveal much about the location and timing of currents in the cell, especially in combination with a known morphology, and simple experiments could resolve questions about spike initiation. From the extracellular potential we compute the ephaptically induced polarization in a nearby passive cable. The magnitude of this induced voltage can be several mV, does not spread electrotonically, and depends only weakly on the passive properties of the cable. We discuss their possible functional relevance.",
"title": ""
},
{
"docid": "3f8860bc21f26b81b066f4c75b9390e1",
"text": "Adaptive filter algorithms are extensively use in active control applications and the availability of low cost powerful digital signal processor (DSP) platforms has opened the way for new applications and further research opportunities in e.g. the active control area. The field of active control demands a solid exposure to practical systems and DSP platforms for a comprehensive understanding of the theory involved. Traditional laboratory experiments prove to be insufficient to fulfill these demands and need to be complemented with more flexible and economic remotely controlled laboratories. The purpose of this thesis project is to implement a number of different adaptive control algorithms in the recently developed remotely controlled Virtual Instrument Systems in Reality (VISIR) ANC/DSP remote laboratory at Blekinge Institute of Technology and to evaluate the performance of these algorithms in the remote laboratory. In this thesis, performance of different filtered-x versions adaptive algorithms (NLMS, LLMS, RLS and FuRLMS) has been evaluated in a remote Laboratory. The adaptive algorithms were implemented remotely on a Texas Instrument DSP TMS320C6713 in an ANC system to attenuate low frequency noise which ranges from 0-200 Hz in a circular ventilation duct using single channel feed forward control. Results show that the remote lab can handle complex and advanced control algorithms. These algorithms were tested and it was found that remote lab works effectively and the achieved attenuation level for the algorithms used on the duct system is comparable to similar applications.",
"title": ""
},
{
"docid": "64d53035eb919d5e27daef6b666b7298",
"text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.",
"title": ""
},
{
"docid": "6d471fcfa68cfb474f2792892e197a66",
"text": "The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the rightamount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion.",
"title": ""
},
{
"docid": "f847a04cb60bbbe5a2cd1ec1c4c9be6f",
"text": "This letter presents a wideband patch antenna on a low-temperature cofired ceramic substrate for Local Multipoint Distribution Service band applications. Conventional rectangular patch antennas have a narrow bandwidth. The proposed via-wall structure enhances the electric field coupling between the stacked patches to achieve wideband characteristics. We designed same-side and opposite-side feeding configurations and report on the fabrication of an experimental 28-GHz antenna used to validate the design concept. Measurements correlate well with the simulation results, achieving a 10-dB impedance bandwidth of 25.4% (23.4-30.2 GHz).",
"title": ""
},
{
"docid": "d5508bb363b6304fe21a808d531a8d41",
"text": "A dual-band printed log-periodic dipole array (LPDA) antenna for wireless communications, designed on a low-cost PET substrate and implemented by inkjet-printing conductive ink, is presented. The proposed antenna can be used for wireless communications both within the UHF (2.4–2.484 GHz) and SHF (5.2-5.8 GHz) wireless frequency bands, and presents a good out-of-band rejection, without the need of stopband filters. The antenna has been designed using a general-purpose 3-D computer-aided design software (CAD), CST Microwave Studio, and then realized. Measured results are in very good agreement with simulations.",
"title": ""
},
{
"docid": "b26a9a78f11227e894af0e58b3b01c98",
"text": "Although all the cells in an organism contain the same genetic information, differences in the cell phenotype arise from the expression of lineage-specific genes. During myelopoiesis, external differentiating signals regulate the expression of a set of transcription factors. The combined action of these transcription factors subsequently determines the expression of myeloid-specific genes and the generation of monocytes and macrophages. In particular, the transcription factor PU.1 has a critical role in this process. We review the contribution of several transcription factors to the control of macrophage development.",
"title": ""
},
{
"docid": "64e1953833fe13e0d99928e442d75d11",
"text": "We develop a new framework to achieve the goal of Wikipedia entity expansion and attribute extraction from the Web. Our framework takes a few existing entities that are automatically collected from a particular Wikipedia category as seed input and explores their attribute infoboxes to obtain clues for the discovery of more entities for this category and the attribute content of the newly discovered entities. One characteristic of our framework is to conduct discovery and extraction from desirable semi-structured data record sets which are automatically collected from the Web. A semi-supervised learning model with Conditional Random Fields is developed to deal with the issues of extraction learning and limited number of labeled examples derived from the seed entities. We make use of a proximate record graph to guide the semi-supervised learning process. The graph captures alignment similarity among data records. Then the semi-supervised learning process can leverage the unlabeled data in the record set by controlling the label regularization under the guidance of the proximate record graph. Extensive experiments on different domains have been conducted to demonstrate its superiority for discovering new entities and extracting attribute content.",
"title": ""
},
{
"docid": "06f8b713ed4020c99403c28cbd1befbc",
"text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so called adversarial examples. This questions the security of deep neural networks (DNN) for many securityand trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.",
"title": ""
},
{
"docid": "99b151b39c13e7106b680ae7935567fd",
"text": "Pediatricians have an important role not only in early recognition and evaluation of autism spectrum disorders but also in chronic management of these disorders. The primary goals of treatment are to maximize the child's ultimate functional independence and quality of life by minimizing the core autism spectrum disorder features, facilitating development and learning, promoting socialization, reducing maladaptive behaviors, and educating and supporting families. To assist pediatricians in educating families and guiding them toward empirically supported interventions for their children, this report reviews the educational strategies and associated therapies that are the primary treatments for children with autism spectrum disorders. Optimization of health care is likely to have a positive effect on habilitative progress, functional outcome, and quality of life; therefore, important issues, such as management of associated medical problems, pharmacologic and nonpharmacologic intervention for challenging behaviors or coexisting mental health conditions, and use of complementary and alternative medical treatments, are also addressed.",
"title": ""
},
{
"docid": "f945b645e492e2b5c6c2d2d4ea6c57ae",
"text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.",
"title": ""
},
{
"docid": "9a42785b743ed0b38334819365977020",
"text": "Low-cost but high-performance robot arms are required for widespread use of service robots. Most robot arms use expensive motors and speed reducers to provide torques sufficient to support the robot mass and payload. If the gravitational torques due to the robot mass, which is usually much greater than the payload, can be compensated for by some means; the robot would need much smaller torques, which can be delivered by cheap actuator modules. To this end, we propose a novel counterbalance mechanism which can completely counterbalance the gravitational torques due to the robot mass. Since most 6-DOF robots have three pitch joints, which are subject to gravitational torques, we propose a 3-DOF counterbalance mechanism based on the double parallelogram mechanism, in which reference planes are provided to each joint for proper counterbalancing. A 5-DOF counterbalance robot arm was built to demonstrate the performance of the proposed mechanism. Simulation and experimental results showed that the proposed mechanism had effectively decreased the torque required to support the robot mass, thus allowing the prospective use of low-cost motors and speed reducers for high-performance robot arms.",
"title": ""
},
{
"docid": "6fec53c8c10c2e7114a1464b2b8e3024",
"text": "This paper provides generalized analysis of active filters used as electromagnetic interference (EMI) filters and active-power filters. Insertion loss and impedance increase of various types of active-filter topologies are described with applicable requirements and limitations as well as the rationale for selecting active-filter topology according to different applications.",
"title": ""
},
{
"docid": "9d803b0ce1f1af621466b1d7f97b7edf",
"text": "This research paper addresses the methodology and approaches to managing criminal computer forensic investigations in a law enforcement environment with management controls, operational controls, and technical controls. Management controls cover policy and standard operating procedures (SOP's), methodology, and guidance. Operational controls cover SOP requirements, seizing evidence, evidence handling, best practices, and education, training and awareness. Technical controls cover acquisition and analysis procedures, data integrity, rules of evidence, presenting findings, proficiency testing, and data archiving.",
"title": ""
},
{
"docid": "b22137cbb14396f1dcd24b2a15b02508",
"text": "This paper studies the self-alignment properties between two chips that are stacked on top of each other with copper pillars micro-bumps. The chips feature alignment marks used for measuring the resulting offset after assembly. The accuracy of the alignment is found to be better than 0.5 µm in × and y directions, depending on the process. The chips also feature waveguides and vertical grating couplers (VGC) fabricated in the front-end-of-line (FEOL) and organized in order to realize an optical interconnection between the chips. The coupling of light between the chips is measured and compared to numerical simulation. This high accuracy self-alignment was obtained after studying the impact of flux and fluxless treatments on the wetting of the pads and the successful assembly yield. The composition of the bump surface was analyzed with Time-of-Flight Secondary Ions Mass Spectroscopy (ToF-SIMS) in order to understand the impact of each treatment. This study confirms that copper pillars micro-bumps can be used to self-align photonic integrated circuits (PIC) with another die (for example a microlens array) in order to achieve high throughput alignment of optical fiber to the PIC.",
"title": ""
},
{
"docid": "900d98cbc830f3fc2b0f65b484f71a7c",
"text": "The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic to each individual. The branching points of the skeletonized vascular network are referred to as thermal minutia points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect for each subject to be stored in the database five different pose images (center, midleft profile, left profile, midright profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area",
"title": ""
}
] |
scidocsrr
|
c6131eb0d21e09ce97f2d0d5e32881df
|
Automatic Scene Inference for 3D Object Compositing
|
[
{
"docid": "b4ab47d8ec52d7a8e989bfc9d6c0d173",
"text": "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.",
"title": ""
},
{
"docid": "2a56585a288405b9adc7d0844980b8bf",
"text": "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g ., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.",
"title": ""
}
] |
[
{
"docid": "4709a4e1165abb5d0018b74495218fc7",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dde4300fb4f29b5ee15bb5e2ef8fe44f",
"text": "In this paper, we propose a static scheduling algorithm for allocating task graphs to fullyconnected multiprocessors. We discuss six recently reported scheduling algorithms and show that they possess one drawback or the other which can lead to poor performance. The proposed algorithm, which is called the Dynamic Critical-Path (DCP) scheduling algorithm, is different from the previously proposed algorithms in a number of ways. First, it determines the critical path of the task graph and selects the next node to be scheduled in a dynamic fashion. Second, it rearranges the schedule on each processor dynamically in the sense that the positions of the nodes in the partial schedules are not fixed until all nodes have been considered. Third, it selects a suitable processor for a node by looking ahead the potential start times of the remaining nodes on that processor, and schedules relatively less important nodes to the processors already in use. A global as well as a pair-wise comparison is carried out for all seven algorithms under various scheduling conditions. The DCP algorithm outperforms the previous algorithms by a considerable margin. Despite having a number of new features, the DCP algorithm has admissible time complexity, is economical in terms of the number of processors used and is suitable for a wide range of graph structures.",
"title": ""
},
{
"docid": "f8984d660f39c66b3bd484ec766fa509",
"text": "The present paper focuses on Cyber Security Awareness Campaigns, and aims to identify key factors regarding security which may lead them to failing to appropriately change people’s behaviour. Past and current efforts to improve information-security practices and promote a sustainable society have not had the desired impact. It is important therefore to critically reflect on the challenges involved in improving information-security behaviours for citizens, consumers and employees. In particular, our work considers these challenges from a Psychology perspective, as we believe that understanding how people perceive risks is critical to creating effective awareness campaigns. Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions. These antecedents of behaviour change are identified in several psychological models of behaviour. We review the suitability of persuasion techniques, including the widely used ‘fear appeals’. From this range of literature, we extract essential components for an awareness campaign as well as factors which can lead to a campaign’s success or failure. Finally, we present examples of existing awareness campaigns in different cultures (the UK and Africa) and reflect on these.",
"title": ""
},
{
"docid": "ef5c44f6895178c8727272dbb74b5df2",
"text": "We present a systematic analysis of existing multi-domain learning approaches with respect to two questions. First, many multidomain learning algorithms resemble ensemble learning algorithms. (1) Are multi-domain learning improvements the result of ensemble learning effects? Second, these algorithms are traditionally evaluated in a balanced class label setting, although in practice many multidomain settings have domain-specific class label biases. When multi-domain learning is applied to these settings, (2) are multidomain methods improving because they capture domain-specific class biases? An understanding of these two issues presents a clearer idea about where the field has had success in multi-domain learning, and it suggests some important open questions for improving beyond the current state of the art.",
"title": ""
},
{
"docid": "f942a0bcda6a9b3f6605cd6263ac0b5c",
"text": "In nose surgery, carved or crushed cartilage used as a graft has some disadvantages, chiefly that it may be perceptible through the nasal skin after tissue resolution is complete. To overcome these problems and to obtain a smoother surface, the authors initiated the use of Surgicel-wrapped diced cartilage. This innovative technique has been used by the authors on 2365 patients over the past 10 years: in 165 patients with traumatic nasal deformity, in 350 patients with postrhinoplasty deformity, and in 1850 patients during primary rhinoplasty. The highlights of the surgical procedure include harvested cartilage (septal, alar, conchal, and sometimes costal) cut in pieces of 0.5 to 1 mm using a no. 11 blade. The fine-textured cartilage mass is then wrapped in one layer of Surgicel and moistened with an antibiotic (rifamycin). The graft is then molded into a cylindrical form and inserted under the dorsal nasal skin. In the lateral wall and tip of the nose, some overcorrection is performed depending on the type of deformity. When the mucosal stitching is complete, this graft can be externally molded, like plasticine, under the dorsal skin. In cases of mild-to-moderate nasal depression, septal and conchal cartilages are used in the same manner to augment the nasal dorsum with consistently effective and durable results. In cases with more severe defects of the nose, costal cartilage is necessary to correct both the length of the nose and the projection of the columella. In patients with recurrent deviation of the nasal bridge, this technique provided a simple solution to the problem. After overexcision of the dorsal part of deviated septal cartilage and insertion of Surgicel-wrapped diced cartilage, a straight nose was obtained in all patients with no recurrence (follow-up of 1 to 10 years). The technique also proved to be highly effective in primary rhinoplasties to camouflage bone irregularities after hump removal in patients with thin nasal skin and/or in cases when excessive hump removal was performed. As a complication, in six patients early postoperative swelling was more than usual. In 16 patients, overcorrection was persistent owing to fibrosis, and in 11 patients resorption was excessive beyond the expected amount. A histologic evaluation was possible in 16 patients, 3, 6, and 12 months postoperatively, by removing thin slices of excess cartilage from the dorsum of the nose during touch-up surgery. This graft showed a mosaic-type alignment of graft cartilage with fibrous tissue connection among the fragments. In conclusion, this type of graft is very easy to apply, because a plasticine-like material is obtained that can be molded with the fingers, giving a smooth surface with desirable form and long-lasting results in all cases. The favorable results obtained by this technique have led the authors to use Surgicel-wrapped diced cartilage routinely in all types of rhinoplasty.",
"title": ""
},
{
"docid": "a20efb7b9f80f37a177855180783e7d9",
"text": "Considered is a knapsack with integer volume Fand which is capable of holding K different classes of objects. An object from class k has integer volume bk, k = 1, . . ., K . Objects arrive randomly to the knapsack; interarrivals are exponential with mean depending on the state of the system. The sojourn time of an object has a general classdependent distribution. An object in the knapsack from class k accrues revenue at a rate rk. The problem is to find a control policy in order to acceptlreject the arriving objects as a function of the current state in order to maximize the average revenue. Optimization is carried out over the class of coordinate convex policies. For the case of K = 2, we show for a wide range of parameters that the optimal control is of the threshold type. In the case of Poisson arrivals and of knapsack and object volumes being integer multiples of each other, it is shown that the optimal policy is always of the double-threshold type. An O(F) algorithm to determine the revenue of threshold policies is also given. For the general case of K classes, we consider the problem of finding the optimal static control where for each class a portion of the knapsack is dedicated. An efficient finite-stage dynamic programming algorithm for locating the optimal static control is presented. Furthermore, variants of the optimal static control which allow some sharing among classes are also discussed.",
"title": ""
},
{
"docid": "4eff2dc30d4b0031dec8be5dda3157d8",
"text": "We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.",
"title": ""
},
{
"docid": "77278e6ba57e82c88f66bd9155b43a50",
"text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.",
"title": ""
},
{
"docid": "76f6a4f44af78fae2960375f8c750878",
"text": "Recently in tandem with the spread of portable devices for reading electronic books, devices for digitizing paper books, called book scanners, are developed to meet the increased demand for digitizing privately owned books. However, conventional book scanners still have complex components to mechanically turn pages and to rectify the acquired images that are inevitably distorted by the curvy book surface. Here, we present the multi-scale mechanism that turns pages electronically using electroadhesive force generated by a micro-scale structure. Its another advantage is that perspective correction of image processing is applicable to readily reconstruct the distorted images of pages. Specifically, to turn one page at a time not two pages, we employ a micro-scale structure to generate near-field electroadhesive force that decays rapidly and accordingly attracts objects within tens of micrometers. We analyze geometrical parameters of the micro-scale structure to improve the decay characteristics. We find that the decay characteristics of electroadhesive force definitely depends upon the geometrical period of the micro-scale structure, while its magnitude depends on a variety of parameters. Based on this observation, we propose a novel electrode configuration with improved decay characteristics. Dynamical stability and kinematic requirements are also examined to successfully introduce near-field electroadhesive force into our digitizing process.",
"title": ""
},
{
"docid": "f0d17b259b699bc7fb7e8f525ec64db0",
"text": "Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term “deep”; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples for supervised learning with DNNs performing simple prediction and classification tasks, are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (benchmark known as MNIST) and speech recognition.",
"title": ""
},
{
"docid": "4958f0fbdf29085cabef3591a1c05c51",
"text": "Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.",
"title": ""
},
{
"docid": "2cbb2af6ed4ef193aad77c2f696a45c5",
"text": "Consider mutli-goal tasks that involve static environments and dynamic goals. Examples of such tasks, such as goaldirected navigation and pick-and-place in robotics, abound. Two types of Reinforcement Learning (RL) algorithms are used for such tasks: model-free or model-based. Each of these approaches has limitations. Model-free RL struggles to transfer learned information when the goal location changes, but achieves high asymptotic accuracy in single goal tasks. Model-based RL can transfer learned information to new goal locations by retaining the explicitly learned state-dynamics, but is limited by the fact that small errors in modelling these dynamics accumulate over long-term planning. In this work, we improve upon the limitations of model-free RL in multigoal domains. We do this by adapting the Floyd-Warshall algorithm for RL and call the adaptation Floyd-Warshall RL (FWRL). The proposed algorithm learns a goal-conditioned action-value function by constraining the value of the optimal path between any two states to be greater than or equal to the value of paths via intermediary states. Experimentally, we show that FWRL is more sample-efficient and learns higher reward strategies in multi-goal tasks as compared to Q-learning, model-based RL and other relevant baselines in a tabular domain.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "29626105b7d6dad21162230296deef9a",
"text": "A quasi-Z-source inverter (qZSI) could achieve buck/boost conversion as well as dc to ac inversion in a single-stage topology, which reduces the structure cost when compared to a traditional two-stage inverter. Specifically, the buck/boost conversion was accomplished via shoot-through state which took place across all phase legs of the inverter. In this paper, instead of using traditional dual-loop-based proportional integral (PI)-P controller, a type 2 based closed-loop voltage controller with novel dc-link voltage reference algorithm was proposed to fulfill the dc-link voltage tracking control of a single-phase qZSI regardless of any loading conditions, without the need of inner inductor current loop. A dc–ac boost inverter with similar circuit parameters as a qZSI was used to verify the flexibility of the proposed controller. The dynamic and transient performances of the proposed controller were investigated to evaluate its superiority against the aforementioned conventional controller. The integrated proposed controller and qZSI topology was then employed in static synchronous compensator application to perform reactive power compensation at the point of common coupling. The effectiveness of the proposed approach was verified through both simulation and experimental studies.",
"title": ""
},
{
"docid": "2c7fe5484b2184564d71a03f19188251",
"text": "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some case. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.",
"title": ""
},
{
"docid": "ddb804eec29ebb8d7f0c80223184305a",
"text": "Near Field Communication (NFC) enables physically proximate devices to communicate over very short ranges in a peer-to-peer manner without incurring complex network configuration overheads. However, adoption of NFC-enabled applications has been stymied by the low levels of penetration of NFC hardware. In this paper, we address the challenge of enabling NFC-like capability on the existing base of mobile phones. To this end, we develop Dhwani, a novel, acoustics-based NFC system that uses the microphone and speakers on mobile phones, thus eliminating the need for any specialized NFC hardware. A key feature of Dhwani is the JamSecure technique, which uses self-jamming coupled with self-interference cancellation at the receiver, to provide an information-theoretically secure communication channel between the devices. Our current implementation of Dhwani achieves data rates of up to 2.4 Kbps, which is sufficient for most existing NFC applications.",
"title": ""
},
{
"docid": "343c1607a4f8df8a8202adb26f9959ed",
"text": "This investigation examined the measurement properties of the Three Domains of Disgust Scale (TDDS). Principal components analysis in Study 1 (n = 206) revealed three factors of Pathogen, Sexual, and Moral Disgust that demonstrated excellent reliability, including test-retest over 12 weeks. Confirmatory factor analyses in Study 2 (n = 406) supported the three factors. Supportive evidence for the validity of the Pathogen and Sexual Disgust subscales was found in Study 1 and Study 2 with strong associations with disgust/contamination and weak associations with negative affect. However, the validity of the Moral Disgust subscale was limited. Study 3 (n = 200) showed that the TDDS subscales differentially related to personality traits. Study 4 (n = 47) provided evidence for the validity of the TDDS subscales in relation to multiple indices of disgust/contamination aversion in a select sample. Study 5 (n = 70) further highlighted limitations of the Moral Disgust subscale given the lack of a theoretically consistent association with moral attitudes. Lastly, Study 6 (n = 178) showed that responses on the Moral Disgust scale were more intense when anger was the response option compared with when disgust was the response option. The implications of these findings for the assessment of disgust are discussed.",
"title": ""
},
{
"docid": "cbdae65bf67066b6606bf72234918c06",
"text": "Computer animated characters have recently gained popularity in many applications, including web pages, computer games, movies, and various human computer interface designs. In order to make these animated characters lively and convincing, they require sophisticated facial expressions and motions. Traditionally, these animations are produced entirely by skilled artists. Although the quality of manually produced animation remains the best, this process is slow and costly. Motion capture performance of actors and actresses is one technique that attempts to speed up this process. One problem with this technique is that the captured motion data can not be edited easily. In recent years, statistical techniques have been used to address this problem by learning the mapping between audio speech and facial motion. New facial motion can be synthesized for novel audio data by reusing the motion capture data. However, since facial expressions are not modeled in these approaches, the resulting facial animation is realistic, yet expressionless. This thesis takes an expressionless talking face and creates an expressive facial animation. This process consists of three parts: expression synthesis, blendshape retargeting, and head motion synthesis. Expression synthesis uses a factorization model to describe the interaction between facial expression and speech content underlying each particular facial appearance. A new facial expression can be applied to novel input video, while retaining the same speech content. Blendshape retargeting maps facial expressions onto a 3D face model using the framework of blendshape interpolation. Three methods of sampling the keyshapes, or the prototype shapes, from data are evaluated. In addition, the generality of blendshape retargeting is demonstrated in three different domains. Head motion synthesis uses audio pitch contours to derive",
"title": ""
},
{
"docid": "90c99c40bfecf75534be0c09d955a207",
"text": "Massive Open Online Courses (MOOCs) have been playing a pivotal role among the latest e-learning initiative and obtain widespread popularity in many universities. But the low course completion rate and the high midway dropout rate of students have puzzled some researchers and designers of MOOCs. Therefore, it is important to explore the factors affecting students’ continuance intention to use MOOCs. This study integrates task-technology fit which can explain how the characteristics of task and technology affect the outcome of technology utilization into expectationconfirmation model to analyze the factors influencing students’ keeping using MOOCs and the relationships of constructs in the model, then it will also extend our understandings of continuance intention about MOOCs. We analyze and study 234 respondents, and results reveal that perceived usefulness, satisfaction and task-technology fit are important precedents of the intention to continue using MOOCs. Researchers and designers of MOOCs may obtain further insight in continuance intention about MOOCs.",
"title": ""
},
{
"docid": "acefbbb42607f2d478a16448644bd6e6",
"text": "The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http://homes.cs.washington.edu/~ccwu/vsfm/.",
"title": ""
}
] |
scidocsrr
|
eefe7afe1ccb4fd18d014b7c2be8d1e4
|
Learning scrum by doing real-life projects
|
[
{
"docid": "8d8e7c9777f02c6a4a131f21a66ee870",
"text": "Teaching agile practices is becoming a priority in Software engineering curricula as a result of the increasing use of agile methods (AMs) such as Scrum in the software industry. Limitations in time, scope, and facilities within academic contexts hinder students’ hands-on experience in the use of professional AMs. To enhance students’ exposure to Scrum, we have developed Virtual Scrum, an educational virtual world that simulates a Scrum-based team room through virtual elements such as blackboards, a Web browser, document viewers, charts, and a calendar. A preliminary version of Virtual Scrum was tested with a group of 45 students running a capstone project with and without Virtual Scrum support. Students’ feedback showed that Virtual Scrum is a viable and effective tool to implement the different elements in a Scrum team room and to perform activities throughout the Scrum process. 2013 Wiley Periodicals, Inc. Comput Appl Eng Educ 23:147–156, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21588",
"title": ""
}
] |
[
{
"docid": "df125587d0529bafa1be721801a67f77",
"text": "This paper describes a self-aligned SiGe heterojunction bipolar transistor (HBT) based on a standard double-polysilicon architecture and nonselective epitaxial growth (i.e. DPSA-NSEG). Emitter-base self-alignment is realized by polysilicon reflow in a hydrogen ambient after emitter window patterning. The fabricated self-aligned SiGe HBTs, with emitter widths of 0.3-0.4 μm, exhibit 20% lower base resistance and 15% higher maximum oscillation frequency fmax than non-self-aligned reference devices. The minimum noise figure of a Ku-band low-noise amplifier is reduced from 0.9 to 0.8 dB by emitter-base self-alignment.",
"title": ""
},
{
"docid": "1f15775000a1837cfc168a91c4c1a2ae",
"text": "In the recent aging society, studies on health care services have been actively conducted to provide quality services to medical consumers in wire and wireless environments. However, there are some problems in these health care services due to the lack of personalized service and the uniformed way in services. For solving these issues, studies on customized services in medical markets have been processed. However, because a diet recommendation service is only focused on the personal disease information, it is difficult to provide specific customized services to users. This study provides a customized diet recommendation service for preventing and managing coronary heart disease in health care services. This service provides a customized diet to customers by considering the basic information, vital sign, family history of diseases, food preferences according to seasons and intakes for the customers who are concerning about the coronary heart disease. The users who receive this service can use a customized diet service differed from the conventional service and that supports continuous services and helps changes in customers living habits.",
"title": ""
},
{
"docid": "05fe74d25c84e46b8044faca8a350a2f",
"text": "BACKGROUND\nAn observational study was conducted in 12 European countries by the European Federation of Clinical Chemistry and Laboratory Medicine Working Group for the Preanalytical Phase (EFLM WG-PRE) to assess the level of compliance with the CLSI H3-A6 guidelines.\n\n\nMETHODS\nA structured checklist including 29 items was created to assess the compliance of European phlebotomy procedures with the CLSI H3-A6 guideline. A risk occurrence chart of individual phlebotomy steps was created from the observed error frequency and severity of harm of each guideline key issue. The severity of errors occurring during phlebotomy was graded using the risk occurrence chart.\n\n\nRESULTS\nTwelve European countries participated with a median of 33 (18-36) audits per country, and a total of 336 audits. The median error rate for the total phlebotomy procedure was 26.9 % (10.6-43.8), indicating a low overall compliance with the recommended CLSI guideline. Patient identification and test tube labelling were identified as the key guideline issues with the highest combination of probability and potential risk of harm. Administrative staff did not adhere to patient identification procedures during phlebotomy, whereas physicians did not adhere to test tube labelling policy.\n\n\nCONCLUSIONS\nThe level of compliance of phlebotomy procedures with the CLSI H3-A6 guidelines in 12 European countries was found to be unacceptably low. The most critical steps in need of immediate attention in the investigated countries are patient identification and tube labelling.",
"title": ""
},
{
"docid": "17c8766c5fcc9b6e0d228719291dcea5",
"text": "In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three tradic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.",
"title": ""
},
{
"docid": "86d4296be61308ec93920d2d84f0694f",
"text": "by Jian Xu Our world produces massive data every day; they exist in diverse forms, from pairwise data and matrix to time series and trajectories. Meanwhile, we have access to the versatile toolkit of network analysis. Networks also have different forms; from simple networks to higher-order network, each representation has different capabilities in carrying information. For researchers who want to leverage the power of the network toolkit, and apply it beyond networks data to sequential data, diffusion data, and many more, the question is: how to represent big data and networks? This dissertation makes a first step to answering the question. It proposes the higherorder network, which is a critical piece for representing higher-order interaction data; it introduces a scalable algorithm for building the network, and visualization tools for interactive exploration. Finally, it presents broad applications of the higher-order network in the real-world. Dedicated to those who strive to be better persons.",
"title": ""
},
{
"docid": "682921e4e2f000384fdcb9dc6fbaa61a",
"text": "The use of Cloud Computing for computation offloading in the robotics area has become a field of interest today. The aim of this work is to demonstrate the viability of cloud offloading in a low level and intensive computing task: a vision-based navigation assistance of a service mobile robot. In order to do so, a prototype, running over a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from on-board stereo cameras will be used by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 × 4 CPU, with the cloud middleware Openstack Havana. The actual task is the shared control of the robot teleoperation, that is, the smooth filtering of the teleoperated commands with the detected obstacles to prevent collisions. All the possible offloading models for this case are presented and analyzed. Several performance results using different communication technologies and offloading models are explained as well. In addition to this, a real navigation case in a domestic circuit was done. The tests demonstrate that offloading computation to the Cloud improves the performance and navigation results with respect to the case where all processing is done by the robot.",
"title": ""
},
{
"docid": "9a05c95de1484df50a5540b31df1a010",
"text": "Resumen. Este trabajo trata sobre un sistema de monitoreo remoto a través de una pantalla inteligente para sensores de temperatura y corriente utilizando una red híbrida CAN−ZIGBEE. El CAN bus es usado como medio de transmisión de datos a corta distancia mientras que Zigbee es empleado para que cada nodo de la red pueda interactuar de manera inalámbrica con el nodo principal. De esta manera la red híbrida combina las ventajas de cada protocolo de comunicación para intercambiar datos. El sistema cuenta con cuatro nodos, dos son CAN y reciben la información de los sensores y el resto son Zigbee. Estos nodos están a cargo de transmitir la información de un nodo CAN de manera inalámbrica y desplegarla en una pantalla inteligente.",
"title": ""
},
{
"docid": "5086a13a84d3de2d5c340c5808e03e53",
"text": "The unstructured scenario, the extraction of significant features, the imprecision of sensors along with the impossibility of using GPS signals are some of the challenges encountered in underwater environments. Given this adverse context, the Simultaneous Localization and Mapping techniques (SLAM) attempt to localize the robot in an efficient way in an unknown underwater environment while, at the same time, generate a representative model of the environment. In this paper, we focus on key topics related to SLAM applications in underwater environments. Moreover, a review of major studies in the literature and proposed solutions for addressing the problem are presented. Given the limitations of probabilistic approaches, a new alternative based on a bio-inspired model is highlighted.",
"title": ""
},
{
"docid": "d7cb103c0dd2e7c8395438950f83da3f",
"text": "We address the effects of packaging on performance, reliability and cost of photonic devices. For silicon photonics we address some specific packaging aspects. Finally we propose an approach for integration of photonics and ASICs.",
"title": ""
},
{
"docid": "fe3775919e0a88dcabdc98bd8c34e6b8",
"text": "In this work, we study the 1-bit convolutional neural networks (CNNs), of which both the weights and activations are binary. While being efficient, the classification accuracy of the current 1-bit CNNs is much worse compared to their counterpart real-valued CNN models on the large-scale dataset, like ImageNet. To minimize the performance gap between the 1-bit and real-valued CNN models, we propose a novel model, dubbed Bi-Real net, which connects the real activations (after the 1-bit convolution and/or BatchNorm layer, before the sign function) to activations of the consecutive block, through an identity shortcut. Consequently, compared to the standard 1-bit CNN, the representational capability of the Bi-Real net is significantly enhanced and the additional cost on computation is negligible. Moreover, we develop a specific training algorithm including three technical novelties for 1bit CNNs. Firstly, we derive a tight approximation to the derivative of the non-differentiable sign function with respect to activation. Secondly, we propose a magnitude-aware gradient with respect to the weight for updating the weight parameters. Thirdly, we pre-train the real-valued CNN model with a clip function, rather than the ReLU function, to better initialize the Bi-Real net. Experiments on ImageNet show that the Bi-Real net with the proposed training algorithm achieves 56.4% and 62.2% top-1 accuracy with 18 layers and 34 layers, respectively. Compared to the state-of-the-arts (e.g., XNOR Net), Bi-Real net achieves up to 10% higher top-1 accuracy with more memory saving and lower computational cost. 4",
"title": ""
},
{
"docid": "5635f52c3e02fd9e9ea54c9ea1ff0329",
"text": "As a digital version of word-of-mouth, online review has become a major information source for consumers and has very important implications for a wide range of management activities. While some researchers focus their studies on the impact of online product review on sales, an important assumption remains unexamined, that is, can online product review reveal the true quality of the product? To test the validity of this key assumption, this paper first empirically tests the underlying distribution of online reviews with data from Amazon. The results show that 53% of the products have a bimodal and non-normal distribution. For these products, the average score does not necessarily reveal the product's true quality and may provide misleading recommendations. Then this paper derives an analytical model to explain when the mean can serve as a valid representation of a product's true quality, and discusses its implication on marketing practices.",
"title": ""
},
{
"docid": "e4c2fcc09b86dc9509a8763e7293cfe9",
"text": "This paperinvestigatesthe useof particle (sub-word) -grams for languagemodelling. One linguistics-basedand two datadriven algorithmsare presentedand evaluatedin termsof perplexity for RussianandEnglish. Interpolatingword trigramand particle6-grammodelsgivesup to a 7.5%perplexity reduction over thebaselinewordtrigrammodelfor Russian.Latticerescor ing experimentsarealsoperformedon1997DARPA Hub4evaluationlatticeswheretheinterpolatedmodelgivesa 0.4%absolute reductionin worderrorrateoverthebaselinewordtrigrammodel.",
"title": ""
},
{
"docid": "fe2ef685733bae2737faa04e8a10087d",
"text": "Federal health agencies are currently developing regulatory strategies for Artificial Intelligence based medical products. Regulatory regimes need to account for the new risks and benefits that come with modern AI, along with safety concerns and potential for continual autonomous learning that makes AI non-static and dramatically different than the drugs and products that agencies are used to regulating. Currently, the U.S. Food and Drug Administration (FDA) and other regulatory agencies treat AI-enabled products as medical devices. Alternatively, we propose that AI regulation in the medical domain can analogously adopt aspects of the models used to regulate medical providers.",
"title": ""
},
{
"docid": "845190bc7aa800405358d9a7c5b38504",
"text": "We describe a sparse Bayesian regression method for recovering 3D human body motion directly from silhouettes extracted from monocular video sequences. No detailed body shape model is needed, and realism is ensured by training on real human motion capture data. The tracker estimates 3D body pose by using Relevance Vector Machine regression to combine a learned autoregressive dynamical model with robust shape descriptors extracted automatically from image silhouettes. We studied several different combination methods, the most effective being to learn a nonlinear observation-update correction based on joint regression with respect to the predicted state and the observations. We demonstrate the method on a 54-parameter full body pose model, both quantitatively using motion capture based test sequences, and qualitatively on a test video sequence.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
},
{
"docid": "186d9fc899fdd92c7e74615a2a054a03",
"text": "In this paper, we propose an illumination-robust face recognition system via local directional pattern images. Usually, local pattern descriptors including local binary pattern and local directional pattern have been used in the field of the face recognition and facial expression recognition, since local pattern descriptors have important properties to be robust against the illumination changes and computational simplicity. Thus, this paper represents the face recognition approach that employs the local directional pattern descriptor and twodimensional principal analysis algorithms to achieve enhanced recognition accuracy. In particular, we propose a novel methodology that utilizes the transformed image obtained from local directional pattern descriptor as the direct input image of two-dimensional principal analysis algorithms, unlike that most of previous works employed the local pattern descriptors to acquire the histogram features. The performance evaluation of proposed system was performed using well-known approaches such as principal component analysis and Gabor-wavelets based on local binary pattern, and publicly available databases including the Yale B database and the CMU-PIE database were employed. Through experimental results, the proposed system showed the best recognition accuracy compared to different approaches, and we confirmed the effectiveness of the proposed method under varying lighting conditions.",
"title": ""
},
{
"docid": "64ac8a4b315656bfb6a5e73f3072347f",
"text": "Understanding the flow characteristic in fishways is crucial for efficient fish migration. Flow characteristic measurements can generally provide quantitative information of velocity distributions in such passages; Particle Image Velocimetry (PIV) has become one of the most versatile techniques to disclose flow fields in general and in fishways, in particular. This paper firstly gives an overview of fish migration along with fish ladders and then the application of PIV measurements on the fish migration process. The overview shows that the quantitative and detailed turbulent flow information in fish ladders obtained by PIV is critical for analyzing turbulent properties and validating numerical results.",
"title": ""
},
{
"docid": "fff6fe0a87a750e83745428b630149d2",
"text": "From 1960 through 1987, 89 patients with stage I (44 patients) or II (45 patients) vaginal carcinoma (excluding melanomas) were treated primarily at the Mayo Clinic. Treatment consisted of surgery alone in 52 patients, surgery plus radiation in 14, and radiation alone in 23. The median duration of follow-up was 4.4 years. The 5-year survival (Kaplan-Meier method) was 82% for patients with stage I disease and 53% for those with stage II disease (p = 0.009). Analysis of survival according to treatment did not show statistically significant differences. This report is consistent with previous studies showing that stage is an important prognostic factor and that treatment can be individualized, including surgical treatment for primary early-stage vaginal cancer.",
"title": ""
},
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
}
] |
scidocsrr
|
047e00e7538272aee2095920e129dbe8
|
Random Walks for Text Semantic Similarity
|
[
{
"docid": "a12769e78530516b382fbc18fe4ec052",
"text": "Roget’s Thesaurus has not been sufficiently appreciated in Natural Language Processing. We show that Roget's and WordNet are birds of a feather. In a few typical tests, we compare how the two resources help measure semantic similarity. One of the benchmarks is Miller and Charles’ list of 30 noun pairs to which human judges had assigned similarity measures. We correlate these measures with those computed by several NLP systems. The 30 pairs can be traced back to Rubenstein and Goodenough’s 65 pairs, which we have also studied. Our Roget’sbased system gets correlations of .878 for the smaller and .818 for the larger list of noun pairs; this is quite close to the .885 that Resnik obtained when he employed humans to replicate the Miller and Charles experiment. We further evaluate our measure by using Roget’s and WordNet to answer 80 TOEFL, 50 ESL and 300 Reader’s Digest questions: the correct synonym must be selected amongst a group of four words. Our system gets 78.75%, 82.00% and 74.33% of the questions respectively, better than any published results.",
"title": ""
},
{
"docid": "d8056ee6b9d1eed4bc25e302c737780c",
"text": "This survey reviews the research related to PageRank computing. Components of a PageRank vector serve as authority weights for Web pages independent of their textual content, solely based on the hyperlink structure of the Web. PageRank is typically used as a Web Search ranking component. This defines the importance of the model and the data structures that underly PageRank processing. Computing even a single PageRank is a difficult computational task. Computing many PageRanks is a much more complex challenge. Recently, significant effort has been invested in building sets of personalized PageRank vectors. PageRank is also used in many diverse applications other than ranking. Below we are interested in the theoretical foundations of the PageRank formulation, in accelerating of PageRank computing, in the effects of particular aspects of Web graph structure on optimal organization of computations, and in PageRank stability. We also review alternative models that lead to authority indices similar to PageRank and the role of such indices in applications other than Web Search. We also discuss link-based search personalization and outline some aspects of PageRank infrastructure from associated measures of convergence to link preprocessing. Content",
"title": ""
}
] |
[
{
"docid": "4d08bbbe59654c1e1140faebcc33701e",
"text": "Muenke Syndrome (FGFR3-Related Craniosynostosis): Expansion of the Phenotype and Review of the Literature Emily S. Doherty, Felicitas Lacbawan, Donald W. Hadley, Carmen Brewer, Christopher Zalewski, H. Jeff Kim, Beth Solomon, Kenneth Rosenbaum, Demetrio L. Domingo, Thomas C. Hart, Brian P. Brooks, LaDonna Immken, R. Brian Lowry, Virginia Kimonis, Alan L. Shanske, Fernanda Sarquis Jehee, Maria Rita Passos Bueno, Carol Knightly, Donna McDonald-McGinn, Elaine H. Zackai, and Maximilian Muenke* National Human Genome Research Institute, National Institutes of Health, Bethesda, Maryland National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland Warren Grant Magnuson Clinical Center, National Institutes of Health, Bethesda, Maryland Children’s National Medical Center, Washington, District of Columbia National Institute of Dental and Craniofacial Research, National Institutes of Health, Bethesda, Maryland National Eye Institute, National Institutes of Health, Bethesda, Maryland Specially for Children, Austin, Texas Department of Medical Genetics, Alberta Children’s Hospital and University of Calgary, Calgary, Alberta, Canada Children’s Hospital Boston, Boston, Massachusetts Children’s Hospital Montefiore, Bronx, New York University of São Paulo, São Paulo, Brazil The Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania Carilion Clinic, Roanoke, Virginia",
"title": ""
},
{
"docid": "2c2942905010e71cda5f8b0f41cf2dd0",
"text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .",
"title": ""
},
{
"docid": "6bbff9d65a0e80fcbf0a6f840266accf",
"text": "This paper presents a complete methodology for the design of AC permanent magnet motors for electric vehicle traction. Electromagnetic, thermal and mechanical performance aspects are considered and modern CAD tools are utilised throughout the methodology. A 36 slot 10 pole interior permanent magnet design example is used throughout the analysis.",
"title": ""
},
{
"docid": "b6cd09d268aa8e140bef9fc7890538c3",
"text": "XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to \"shred\" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.",
"title": ""
},
{
"docid": "888de1004e212e1271758ac35ff9807d",
"text": "We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.",
"title": ""
},
{
"docid": "fae60b86d98a809f876117526106719d",
"text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.",
"title": ""
},
{
"docid": "62686423e15ef0cac3a3bbe8f33e3367",
"text": "Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map are based on a common framework that takes a 2D depth map and directly regresses the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. While the depth map is intrinsically 3D data, many previous methods treat depth maps as 2D images that can distort the shape of the actual object through projection from 3D to 2D space. This compels the network to perform perspective distortion-invariant estimation. The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly nonlinear mapping, which causes difficulty in the learning procedure. To overcome these weaknesses, we firstly cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real-time. Our system outperforms previous methods in almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available in1.",
"title": ""
},
{
"docid": "2f54746f666befe19af1391f1d90aca8",
"text": "The Internet of Things has drawn lots of research attention as the growing number of devices connected to the Internet. Long Term Evolution-Advanced (LTE-A) is a promising technology for wireless communication and it's also promising for IoT. The main challenge of incorporating IoT devices into LTE-A is a large number of IoT devices attempting to access the network in a short period which will greatly reduce the network performance. In order to improve the network utilization, we adopted a hierarchy architecture using a gateway for connecting the devices to the eNB and proposed a multiclass resource allocation algorithm for LTE based IoT communication. Simulation results show that the proposed algorithm can provide good performance both on data rate and latency for different QoS applications both in saturated and unsaturated environment.",
"title": ""
},
{
"docid": "dc94e340ceb76a0c9fda47bac4be9920",
"text": "Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.",
"title": ""
},
{
"docid": "a83931702879dc41a3d7007ac4c32716",
"text": "We propose a query-based generative model for solving both tasks of question generation (QG) and question answering (QA). The model follows the classic encoderdecoder framework. The encoder takes a passage and a query as input then performs query understanding by matching the query with the passage from multiple perspectives. The decoder is an attention-based Long Short Term Memory (LSTM) model with copy and coverage mechanisms. In the QG task, a question is generated from the system given the passage and the target answer, whereas in the QA task, the answer is generated given the question and the passage. During the training stage, we leverage a policy-gradient reinforcement learning algorithm to overcome exposure bias, a major problem resulted from sequence learning with cross-entropy loss. For the QG task, our experiments show higher performances than the state-of-the-art results. When used as additional training data, the automatically generated questions even improve the performance of a strong extractive QA system. In addition, our model shows better performance than the state-of-the-art baselines of the generative QA task.",
"title": ""
},
{
"docid": "d75d453181293c92ec9bab800029e366",
"text": "For a majority of applications implemented today, the Intermediate Bus Architecture (IBA) has been the preferred power architecture. This power architecture has led to the development of the isolated, semi-regulated DC/DC converter known as the Intermediate Bus Converter (IBC). Fixed ratio Bus Converters that employ a new power topology known as the Sine Amplitude Converter (SAC) offer dramatic improvements in power density, noise reduction, and efficiency over the existing IBC products. As electronic systems continue to trend toward lower voltages with higher currents and as the speed of contemporary loads - such as state-of-the-art processors and memory - continues to increase, the power systems designer is challenged to provide small, cost effective and efficient solutions that offer the requisite performance. Traditional power architectures cannot, in the long run, provide the required performance. Vicor's Factorized Power Architecture (FPA), and the implementation of V·I Chips, provides a revolutionary new and optimal power conversion solution that addresses the challenge in every respect. The technology behind these power conversion engines used in the IBC and V·I Chips is analyzed and contextualized in a system perspective.",
"title": ""
},
{
"docid": "eade87f676c023cd3024226b48131ffb",
"text": "Finding the dense regions of a graph and relations among them is a fundamental task in network analysis. Nucleus decomposition is a principled framework of algorithms that generalizes the k-core and k-truss decompositions. It can leverage the higher-order structures to locate the dense subgraphs with hierarchical relations. Computation of the nucleus decomposition is performed in multiple steps, known as the peeling process, and it requires global information about the graph at any time. This prevents the scalable parallelization of the computation. Also, it is not possible to compute approximate and fast results by the peeling process, because it does not produce the densest regions until the algorithm is complete. In a previous work, Lu et al. proposed to iteratively compute the h-indices of vertex degrees to obtain the core numbers and prove that the convergence is obtained after a finite number of iterations. In this work, we generalize the iterative h-index computation for any nucleus decomposition and prove convergence bounds. We present a framework of local algorithms to obtain the exact and approximate nucleus decompositions. Our algorithms are pleasingly parallel and can provide approximations to explore time and quality trade-offs. Our shared-memory implementation verifies the efficiency, scalability, and effectiveness of our algorithms on real-world networks. In particular, using 24 threads, we obtain up to 4.04x and 7.98x speedups for k-truss and (3, 4) nucleus decompositions.",
"title": ""
},
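The abstract above builds on the result that iterating h-index updates over neighbour values converges to the core numbers. A minimal sketch of that k-core special case is given below; it omits the paper's generalization to nucleus decompositions, its convergence bounds, and its parallel implementation, and the toy graph is made up.

```python
# Minimal sketch of the k-core special case: repeatedly replace each vertex
# value by the h-index of its neighbours' values until a fixed point; the
# fixed point equals the core numbers (the iterative h-index idea the paper
# generalizes to nucleus decompositions).
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    values = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(values, start=1):
        if v >= i:
            h = i
        else:
            break
    return h

def core_numbers(adj):
    """adj: dict mapping vertex -> list of neighbour vertices."""
    c = {v: len(adj[v]) for v in adj}          # start from degrees
    changed = True
    while changed:
        changed = False
        for v in adj:
            new = h_index([c[u] for u in adj[v]])
            if new != c[v]:
                c[v] = new
                changed = True
    return c

# Triangle plus a pendant vertex: the triangle is a 2-core, the pendant is 1.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(core_numbers(graph))   # {0: 2, 1: 2, 2: 2, 3: 1}
```

The updates are purely local (each vertex only looks at its neighbours' current values), which is what makes this style of computation easy to run in parallel and to stop early for approximate results.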
{
"docid": "0344917c6b44b85946313957a329bc9c",
"text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.",
"title": ""
},
{
"docid": "7965e8074a84c64c971e22995caaab6b",
"text": "Mechanical details as well as electrical models of FDR (frequency domain reflectometry) sensors for the measurement of the complex dielectric permittivity of porous materials are presented. The sensors are formed from two stainless steel parallel waveguides of various lengths. Using the data from VNA (vector network analyzer) with the connected FDR sensor and selected models of the applied sensor it was possible obtain the frequency spectrum of dielectric permittivity from 10 to 500 MHz of reference liquids and soil samples of various moisture and salinity. The performance of the analyzed sensors were compared with TDR (time domain reflectometry) ones of similar mechanical construction.",
"title": ""
},
{
"docid": "5fbb54e63158066198cdf59e1a8e9194",
"text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.",
"title": ""
},
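The abstract above argues for balancing received signal powers at the gateway so that the capture effect does not systematically favour near nodes. The snippet below is only a hedged illustration of that idea as a simple open-loop rule; it does not reproduce the paper's actual power control algorithm, and the RSSI target, power levels, and path-loss figures are invented for the example.

```python
# Hedged sketch of received-power balancing: each node picks the lowest
# transmit power that brings its gateway-side RSSI up to a common target, so
# near and far nodes collide on roughly equal terms. The target, power range
# and path-loss numbers below are illustrative assumptions, not values from
# the paper.
TARGET_RSSI_DBM = -100.0             # desired received power at the gateway
TX_POWER_STEPS = [2, 5, 8, 11, 14]   # typical LoRa TX power settings in dBm

def choose_tx_power(path_loss_db):
    """Smallest available TX power whose RSSI meets the target, else the max."""
    for p in TX_POWER_STEPS:
        if p - path_loss_db >= TARGET_RSSI_DBM:
            return p
    return TX_POWER_STEPS[-1]

# Nodes at different distances see different path losses (dB).
for loss in (105.0, 110.0, 113.0):
    p = choose_tx_power(loss)
    print(f"path loss {loss:5.1f} dB -> TX {p:2d} dBm, RSSI {p - loss:6.1f} dBm")
```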
{
"docid": "5b984d57ad0940838b703eadd7c733b3",
"text": "Neural sequence generation is commonly approached by using maximumlikelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α→ 0 and RL to α→ 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.",
"title": ""
},
{
"docid": "549d486d6ff362bc016c6ce449e29dc9",
"text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
},
{
"docid": "93e5ed1d67fe3d20c7b0177539e509c4",
"text": "Business models that rely on social media and user-generated content have shifted from the more traditional business model, where value for the organization is derived from the one-way delivery of products and/or services, to the provision of intangible value based on user engagement. This research builds a model that hypothesizes that the user experiences from social interactions among users, operationalized as personalization, transparency, access to social resources, critical mass of social acquaintances, and risk, as well as with the technical features of the social media platform, operationalized as the completeness, flexibility, integration, and evolvability, influence user engagement and subsequent usage behavior. Using survey responses from 408 social media users, findings suggest that both social and technical factors impact user engagement and ultimately usage with additional direct impacts on usage by perceptions of the critical mass of social acquaintances and risk. KEywORdS Social Interactions, Social Media, Social Networking, Technical Features, Use, User Engagement, User Experience",
"title": ""
}
] |
scidocsrr
|
efb4dd43048d7298ab1eaa064d0bf263
|
The effect of strength training on performance in endurance athletes.
|
[
{
"docid": "558eb032e7060abcc6c7f79be7c728aa",
"text": "In the exercising human, maximal oxygen uptake (VO2max) is limited by the ability of the cardiorespiratory system to deliver oxygen to the exercising muscles. This is shown by three major lines of evidence: 1) when oxygen delivery is altered (by blood doping, hypoxia, or beta-blockade), VO2max changes accordingly; 2) the increase in VO2max with training results primarily from an increase in maximal cardiac output (not an increase in the a-v O2 difference); and 3) when a small muscle mass is overperfused during exercise, it has an extremely high capacity for consuming oxygen. Thus, O2 delivery, not skeletal muscle O2 extraction, is viewed as the primary limiting factor for VO2max in exercising humans. Metabolic adaptations in skeletal muscle are, however, critical for improving submaximal endurance performance. Endurance training causes an increase in mitochondrial enzyme activities, which improves performance by enhancing fat oxidation and decreasing lactic acid accumulation at a given VO2. VO2max is an important variable that sets the upper limit for endurance performance (an athlete cannot operate above 100% VO2max, for extended periods). Running economy and fractional utilization of VO2max also affect endurance performance. The speed at lactate threshold (LT) integrates all three of these variables and is the best physiological predictor of distance running performance.",
"title": ""
}
] |
[
{
"docid": "9f1336d17f5d8fd7e04bd151eabb6a97",
"text": "Immensely popular video sharing websites such as YouTube have become the most important sources of music information for Internet users and the most prominent platform for sharing live music. The audio quality of this huge amount of live music recordings, however, varies significantly due to factors such as environmental noise, location, and recording device. However, most video search engines do not take audio quality into consideration when retrieving and ranking results. Given the fact that most users prefer live music videos with better audio quality, we propose the first automatic, non-reference audio quality assessment framework for live music video search online. We first construct two annotated datasets of live music recordings. The first dataset contains 500 human-annotated pieces, and the second contains 2,400 synthetic pieces systematically generated by adding noise effects to clean recordings. Then, we formulate the assessment task as a ranking problem and try to solve it using a learning-based scheme. To validate the effectiveness of our framework, we perform both objective and subjective evaluations. Results show that our framework significantly improves the ranking performance of live music recording retrieval and can prove useful for various real-world music applications.",
"title": ""
},
{
"docid": "9b176a25a16b05200341ac54778a8bfc",
"text": "This paper reports on a study of motivations for the use of peer-to-peer or sharing economy services. We interviewed both users and providers of these systems to obtain different perspectives and to determine if providers are matching their system designs to the most important drivers of use. We found that the motivational models implicit in providers' explanations of their systems' designs do not match well with what really seems to motivate users. Providers place great emphasis on idealistic motivations such as creating a better community and increasing sustainability. Users, on the other hand are looking for services that provide what they need whilst increasing value and convenience. We discuss the divergent models of providers and users and offer design implications for peer system providers.",
"title": ""
},
{
"docid": "2b8b06965cca346f3714cbaa1704ab83",
"text": "Visual question answering (Visual QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiplechoice based datasets where the learner has to select the right answer from a set of candidate ones including the target (i.e. the correct one) and the decoys (i.e. the incorrect ones). Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets. In particular, the resulting learner can ignore the visual information, the question, or both while still doing well on the task. Inspired by this, we propose automatic procedures to remedy such design deficiencies. We apply the procedures to re-construct decoy answers for two popular Visual QA datasets as well as to create a new Visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task. Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets and the performance on them is likely a more faithful indicator of the difference among learning models. The datasets are released and publicly available via http://www.teds. usc.edu/website_vqa/.",
"title": ""
},
{
"docid": "a5911891697a1b2a407f231cf0ad6c28",
"text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.",
"title": ""
},
{
"docid": "2b0fa1c4dceb94a2d8c1395dae9fad99",
"text": "Among the major problems facing technical management today are those involving the coordination of many diverse activities toward a common goal. In a large engineering project, for example, almost all the engineering and craft skills are involved as well as the functions represented by research, development, design, procurement, construction, vendors, fabricators and the customer. Management must devise plans which will tell with as much accuracy as possible how the efforts of the people representing these functions should be directed toward the project's completion. In order to devise such plans and implement them, management must be able to collect pertinent information to accomplish the following tasks:\n (1) To form a basis for prediction and planning\n (2) To evaluate alternative plans for accomplishing the objective\n (3) To check progress against current plans and objectives, and\n (4) To form a basis for obtaining the facts so that decisions can be made and the job can be done.",
"title": ""
},
{
"docid": "56ff9b231738b24fda47ab152bf78ba1",
"text": "We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS.",
"title": ""
},
{
"docid": "cb793f98ea1a001dde3ac87a0b181ebd",
"text": "We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic “addition” and “multiplication” long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks. 1 MODELS FOR SEQUENTIAL DATA Many problems in machine learning are best formulated using sequential data and appropriate models for these tasks must be able to capture temporal dependencies in sequences, potentially of arbitrary length. One such class of models are recurrent neural networks (RNNs), which can be considered a learnable function f whose output ht = f(xt, ht−1) at time t depends on input xt and the model’s previous state ht−1. Training of RNNs with backpropagation through time (Werbos, 1990) is hindered by the vanishing and exploding gradient problem (Pascanu et al., 2012; Hochreiter & Schmidhuber, 1997; Bengio et al., 1994), and as a result RNNs are in practice typically only applied in tasks where sequential dependencies span at most hundreds of time steps. Very long sequences can also make training computationally inefficient due to the fact that RNNs must be evaluated sequentially and cannot be fully parallelized.",
"title": ""
},
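Since the abstract above concerns a simplified, feed-forward form of attention, a small numeric sketch may help: score each timestep, softmax the scores over time, and pool the sequence into a single vector. The scoring function used below (v^T tanh(W h_t)) is one common choice and the weights are random placeholders, so this should be read as an illustration rather than the paper's exact model.

```python
# Minimal numpy sketch of feed-forward attention pooling over a sequence:
# score each timestep with a small learned function, softmax the scores over
# time, and take the weighted sum as a fixed-length summary. The random
# weights below stand in for parameters that would normally be trained.
import numpy as np

rng = np.random.default_rng(0)
T, d = 20, 8                       # sequence length, feature dimension
h = rng.standard_normal((T, d))    # per-timestep hidden states (placeholder)

W = rng.standard_normal((d, d)) * 0.1   # untrained parameters of the scorer
v = rng.standard_normal(d) * 0.1

scores = np.tanh(h @ W) @ v             # e_t = v^T tanh(W h_t)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # alpha = softmax(e) over time
context = weights @ h                   # c = sum_t alpha_t h_t

print(weights.shape, context.shape)     # (20,) (8,)
```

Because the pooled vector no longer depends on processing the sequence step by step, this kind of attention can be attached to a purely feed-forward network, which is the point the abstract makes about solving long-term memory tasks without recurrence.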
{
"docid": "71819107f543aa2b20b070e322cf1bbb",
"text": "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.",
"title": ""
},
{
"docid": "f279060b5ebe9b163d08f29b0e70619c",
"text": "Silver film over nanospheres (AgFONs) were successfully employed as surface-enhanced Raman spectroscopy (SERS) substrates to characterize several artists' red dyes including: alizarin, purpurin, carminic acid, cochineal, and lac dye. Spectra were collected on sample volumes (1 x 10(-6) M or 15 ng/microL) similar to those that would be found in a museum setting and were found to be higher in resolution and consistency than those collected on silver island films (AgIFs). In fact, to the best of the authors' knowledge, this work presents the highest resolution spectrum of the artists' material cochineal to date. In order to determine an optimized SERS system for dye identification, experiments were conducted in which laser excitation wavelengths were matched with correlating AgFON localized surface plasmon resonance (LSPR) maxima. Enhancements of approximately two orders of magnitude were seen when resonance SERS conditions were met in comparison to non-resonance SERS conditions. Finally, because most samples collected in a museum contain multiple dyestuffs, AgFONs were employed to simultaneously identify individual dyes within several dye mixtures. These results indicate that AgFONs have great potential to be used to identify not only real artwork samples containing a single dye but also samples containing dyes mixtures.",
"title": ""
},
{
"docid": "b59e527be8cfb1a0d9f475904bbf1602",
"text": "Clustering is grouping input data sets into subsets, called ’clusters’ within which the elements are somewhat similar. In general, clustering is an unsupervised learning task as very little or no prior knowledge is given except the input data sets. The tasks have been used in many fields and therefore various clustering algorithms have been developed. Clustering task is, however, computationally expensive as many of the algorithms require iterative or recursive procedures and most of real-life data is high dimensional. Therefore, the parallelization of clustering algorithms is inevitable, and various parallel clustering algorithms have been implemented and applied to many applications. In this paper, we review a variety of clustering algorithms and their parallel versions as well. Although the parallel clustering algorithms have been used for many applications, the clustering tasks are applied as preprocessing steps for parallelization of other algorithms too. Therefore, the applications of parallel clustering algorithms and the clustering algorithms for parallel computations are described in this paper.",
"title": ""
},
{
"docid": "2ea12c68f02657acb9fb27f6ace7e746",
"text": "1. Established relevance of authalic spherical parametrization for creating geometry images used subsequently in CNN. 2. Robust authalic parametrization of arbitrary shapes using area restoring diffeomorphic flow and barycentric mapping. 3. Creation of geometry images (a) with appropriate shape feature for rigid/non-rigid shape analysis, (b) which are robust to cut and amenable to learn using CNNs. Experiments Cuts & Data Augmentation",
"title": ""
},
{
"docid": "429c900f6ac66bcea5aa068d27f5b99f",
"text": "Recent researches shows that Brain Computer Interface (BCI) technology provides effective way of communication between human and physical device. In this work, an EEG based wireless mobile robot is implemented for people suffer from motor disabilities can interact with physical devices based on Brain Computer Interface (BCI). An experimental model of mobile robot is explored and it can be controlled by human eye blink strength. EEG signals are acquired from NeuroSky Mind wave Sensor (single channel prototype) in non-invasive manner and Signal features are extracted by adopting Discrete Wavelet Transform (DWT) to amend the signal resolution. We analyze and compare the db4 and db7 wavelets for accurate classification of blink signals. Different classes of movements are achieved based on different blink strength of user. The experimental setup of adaptive human machine interface system provides better accuracy and navigates the mobile robot based on user command, so it can be adaptable for disabled people.",
"title": ""
},
{
"docid": "4c12b827ee445ab7633aefb8faf222a2",
"text": "Research shows that speech dereverberation (SD) with Deep Neural Network (DNN) achieves the state-of-the-art results by learning spectral mapping, which, simultaneously, lacks the characterization of the local temporal spectral structures (LTSS) of speech signal and calls for a large storage space that is impractical in real applications. Contrarily, the Convolutional Neural Network (CNN) offers a better modeling ability by considering local patterns and has less parameters with its weights sharing property, which motivates us to employ the CNN for SD task. In this paper, to our knowledge, a Deep Convolutional Encoder-Decoder (DCED) model is proposed for the first time in dealing with the SD task (DCED-SD), where the advantage of the DCED-SD model lies in its powerful LTSS modeling capability via convolutional encoder-decoder layers with smaller storage requirement. By taking the reverberant and anechoic spectrum as training pairs, the proposed DCED-SD is well-trained in a supervised manner with less convergence time. Additionally, the DCED-SD model size is 23 times smaller than the size of DNN-SD model with better performance achieved. By using the simulated and real-recorded data, extensive experiments have been conducted to demonstrate the superiority of DCED-based SD method over the DNN-based SD method under different unseen reverberant conditions.",
"title": ""
},
{
"docid": "3c592d5ba9aa08f30f1e3afe890677a2",
"text": "Education in Latin America is an important part of social policy. Although huge strides were made in the last decade, a region of disparity between the rich and· poor needs to focus on the reduction of inequality of access and provision if it is to hope for qualitative change. Detailed achievements and challenges are presented, with an emphasis on improving school enrolment and a change to curriculum relevant for the future and local community and business involvement. Change will be achieved by a combination of new teachers, new management and leadership, and the involvement of all society.",
"title": ""
},
{
"docid": "2550502036aac5cf144cb8a0bc2d525b",
"text": "Significant achievements have been made on the development of next-generation filtration and separation membranes using graphene materials, as graphene-based membranes can afford numerous novel mass-transport properties that are not possible in state-of-art commercial membranes, making them promising in areas such as membrane separation, water desalination, proton conductors, energy storage and conversion, etc. The latest developments on understanding mass transport through graphene-based membranes, including perfect graphene lattice, nanoporous graphene and graphene oxide membranes are reviewed here in relation to their potential applications. A summary and outlook is further provided on the opportunities and challenges in this arising field. The aspects discussed may enable researchers to better understand the mass-transport mechanism and to optimize the synthesis of graphene-based membranes toward large-scale production for a wide range of applications.",
"title": ""
},
{
"docid": "3f467988a35ecb7b6b9feef049407bb2",
"text": "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.",
"title": ""
},
{
"docid": "0ddb95e00f5502c826e6ec380d58911b",
"text": "Antenna selection is a multiple-input multiple-output (MIMO) technology, which uses radio frequency (RF) switches to select a good subset of antennas. Antenna selection can alleviate the requirement on the number of RF transceivers, thus being attractive for massive MIMO systems. In massive MIMO antenna selection systems, RF switching architectures need to be carefully considered. In this paper, we examine two switching architectures, i.e., full-array and sub-array. By assuming independent and identically distributed Rayleigh flat fading channels, we use asymptotic theory on order statistics to derive the asymptotic upper capacity bounds of massive MIMO channels with antenna selection for the both switching architectures in the large-scale limit. We also use the derived bounds to further derive the upper bounds of the ergodic achievable spectral efficiency considering the channel state information (CSI) acquisition. It is also showed that the ergodic capacity of sub-array antenna selection system scales no faster than double logarithmic rate. In addition, optimal antenna selection algorithms based on branch-and-bound are proposed for both switching architectures. Our results show that the derived asymptotic bounds are effective and also apply to the finite-dimensional MIMO. The CSI acquisition is one of the main limits for the massive MIMO antenna selection systems in the time-variant channels. The proposed optimal antenna selection algorithms are much faster than the exhaustive-search-based antenna selection, e.g., 1000 × speedup observed in the large-scale system. Interestingly, the full-array and sub-array systems have very close performance, which is validated by their exact capacities and their close upper bounds on capacity.",
"title": ""
},
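The abstract above is about choosing a good antenna subset; the objective typically maximized in such work is the equal-power MIMO capacity of the selected sub-channel. The sketch below evaluates that objective by brute force on a tiny i.i.d. Rayleigh channel. It is not the paper's branch-and-bound algorithm and says nothing about its switching architectures; the dimensions, SNR, and random seed are arbitrary choices for the demo.

```python
# Hedged sketch: exhaustive antenna-subset selection maximizing the standard
# equal-power MIMO capacity log2 det(I + (SNR/Ns) H_S H_S^H). This is a tiny
# brute-force stand-in for the paper's branch-and-bound algorithms, meant
# only to illustrate the selection objective.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_cand, n_sel, snr = 4, 8, 2, 10.0   # pick 2 of 8 candidate TX antennas

# i.i.d. Rayleigh channel: rows = receive antennas, columns = candidate antennas
H = (rng.standard_normal((n_rx, n_cand))
     + 1j * rng.standard_normal((n_rx, n_cand))) / np.sqrt(2)

def capacity(h_sub):
    """Equal-power capacity log2 det(I + (SNR/Ns) H_S H_S^H) in bit/s/Hz."""
    g = h_sub @ h_sub.conj().T
    return np.log2(np.linalg.det(np.eye(n_rx) + (snr / h_sub.shape[1]) * g)).real

best = max(itertools.combinations(range(n_cand), n_sel),
           key=lambda s: capacity(H[:, list(s)]))
print("selected antennas:", best,
      "capacity %.2f bit/s/Hz" % capacity(H[:, list(best)]))
```

Exhaustive search like this scales combinatorially with the number of candidate antennas, which is exactly why the paper resorts to branch-and-bound for the massive MIMO setting.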
{
"docid": "05d282026dcecb3286c9ffbd88cb72a3",
"text": "Although deep neural networks (DNNs) are state-of-the-art artificial intelligence systems, it is unclear what insights, if any, they provide about human intelligence. We address this issue in the domain of visual perception. After briefly describing DNNs, we provide an overview of recent results comparing human visual representations and performance with those of DNNs. In many cases, DNNs acquire visual representations and processing strategies that are very different from those used by people. We conjecture that there are at least two factors preventing them from serving as better psychological models. First, DNNs are currently trained with impoverished data, such as data lacking important visual cues to three-dimensional structure, data lacking multisensory statistical regularities, and data in which stimuli are unconnected to an observer’s actions and goals. Second, DNNs typically lack adaptations to capacity limits, such as attentional mechanisms, visual working memory, and compressed mental representations biased toward preserving task-relevant abstractions.",
"title": ""
},
{
"docid": "3e18a760083cd3ed169ed8dae36156b9",
"text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making",
"title": ""
},
{
"docid": "90549b287e67a38516a08a87756130fc",
"text": "Based on a sample of 944 respondents who were recruited from 20 elementary schools in South Korea, this research surveyed the factors that lead to smartphone addiction. This research examined the user characteristics and media content types that can lead to addiction. With regard to user characteristics, results showed that those who have lower self-control and those who have greater stress were more likely to be addicted to smartphones. For media content types, those who use smartphones for SNS, games, and entertainment were more likely to be addicted to smartphones, whereas those who use smartphones for study-related purposes were not. Although both SNS use and game use were positive predictors of smartphone addiction, SNS use was a stronger predictor of smartphone addiction than",
"title": ""
}
] |
scidocsrr
|
4c45393f8d80acbf4b4bc8630255ea0e
|
Compositional Verification for Autonomous Systems with Deep Learning Components
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
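For readers less familiar with Lloyd's algorithm referenced above, a plain (unaccelerated) version is sketched below. It shows only the assignment and update steps; the paper's actual contribution, the kd-tree-based filtering that prunes candidate centers, is not implemented here, and the toy data is synthetic.

```python
# Plain Lloyd iterations for reference -- this is the baseline the abstract's
# filtering algorithm accelerates with a kd-tree; no kd-tree pruning is shown
# here, and the toy data is made up.
import numpy as np

def lloyd_kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: move each center to the mean of its assigned points
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Three well-separated Gaussian blobs in 2D.
pts = np.vstack([np.random.default_rng(1).normal(loc=c, scale=0.2, size=(50, 2))
                 for c in ((0, 0), (3, 3), (0, 3))])
centers, labels = lloyd_kmeans(pts, k=3)
print(np.round(centers, 2))
```

Each iteration above scans every point against every center; the filtering algorithm avoids much of that work by storing points in a kd-tree and discarding ("filtering") centers that cannot be the nearest for any point in a subtree, which is why it speeds up as clusters become better separated.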
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "01ed1959250874c55bd32d472461718f",
"text": "Deep neural networks have become widely used, obtaining remarkable results in domains such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, and bio-informatics, where they have produced results comparable to human experts. However, these networks can be easily “fooled” by adversarial perturbations: minimal changes to correctly-classified inputs, that cause the network to misclassify them. This phenomenon represents a concern for both safety and security, but it is currently unclear how to measure a network’s robustness against such perturbations. Existing techniques are limited to checking robustness around a few individual input points, providing only very limited guarantees. We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. The approach is data-guided, relying on clustering to identify well-defined geometric regions as candidate safe regions. We then utilize verification techniques to confirm that these regions are safe or to provide counter-examples showing that they are not safe. We also introduce the notion of targeted robustness which, for a given target label and region, ensures that a NN does not map any input in the region to the target label. We evaluated our technique on the MNIST dataset and on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our approach identified multiple regions which were completely safe as well as some which were only safe for specific labels. It also discovered several adversarial perturbations of interest.",
"title": ""
},
{
"docid": "11a69c06f21e505b3e05384536108325",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
}
] |
[
{
"docid": "cc6b9165f395e832a396d59c85f482cc",
"text": "Vision-based automatic counting of people has widespread applications in intelligent transportation systems, security, and logistics. However, there is currently no large-scale public dataset for benchmarking approaches on this problem. This work fills this gap by introducing the first real-world RGBD People Counting DataSet (PCDS) containing over 4, 500 videos recorded at the entrance doors of buses in normal and cluttered conditions. It also proposes an efficient method for counting people in real-world cluttered scenes related to public transportations using depth videos. The proposed method computes a point cloud from the depth video frame and re-projects it onto the ground plane to normalize the depth information. The resulting depth image is analyzed for identifying potential human heads. The human head proposals are meticulously refined using a 3D human model. The proposals in each frame of the continuous video stream are tracked to trace their trajectories. The trajectories are again refined to ascertain reliable counting. People are eventually counted by accumulating the head trajectories leaving the scene. To enable effective head and trajectory identification, we also propose two different compound features. A thorough evaluation on PCDS demonstrates that our technique is able to count people in cluttered scenes with high accuracy at 45 fps on a 1.7 GHz processor, and hence it can be deployed for effective real-time people counting for intelligent transportation systems.",
"title": ""
},
{
"docid": "708fbc1eff4d96da2f3adaa403db3090",
"text": "We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build over Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to its objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art generated by contemporary artists and shown in top art",
"title": ""
},
{
"docid": "9cebb39b2eb340a21c4f64c1bb42217e",
"text": "Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.",
"title": ""
},
{
"docid": "f569131096d56336fa3ed547c05c2be4",
"text": "Providing high quality recommendations is important for e-commerce systems to assist users in making effective selection decisions from a plethora of choices. Collaborative filtering is a widely accepted technique to generate recommendations based on the ratings of like-minded users. However, it suffers from several inherent issues such as data sparsity and cold start. To address these problems, we propose a novel method called ''Merge'' to incorporate social trust information (i.e., trusted neighbors explicitly specified by users) in providing recommendations. Specifically, ratings of a user's trusted neighbors are merged to complement and represent the preferences of the user and to find other users with similar preferences (i.e., similar users). In addition, the quality of merged ratings is measured by the confidence considering the number of ratings and the ratio of conflicts between positive and negative opinions. Further, the rating confidence is incorporated into the computation of user similarity. The prediction for a given item is generated by aggregating the ratings of similar users. Experimental results based on three real-world data sets demonstrate that our method outperforms other counterparts both in terms of accuracy and coverage. The emergence of Web 2.0 applications has greatly changed users' styles of online activities from searching and browsing to interacting and sharing [6,40]. The available choices grow up exponentially, and make it challenge for users to find useful information which is well-known as the information overload problem. Recommender systems are designed and heavily used in modern e-commerce applications to cope with this problem, i.e., to provide users with high quality, personalized recommendations , and to help them find items (e.g., books, movies, news, music, etc.) of interest from a plethora of available choices. Collaborative filtering (CF) is one of the most well-known and commonly used techniques to generate recommendations [1,17]. The heuristic is that the items appreciated by those who have similar taste will also be in favor of by the active users (who desire recommendations). However, CF suffers from several inherent issues such as data sparsity and cold start. The former issue refers to the difficulty in finding sufficient and reliable similar users due to the fact that users in general only rate a small portion of items, while the latter refers to the dilemma that accurate recommendations are expected for the cold users who rate only a few items and thus whose preferences are hard to be inferred. To resolve these issues and model …",
"title": ""
},
{
"docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4",
"text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.",
"title": ""
},
{
"docid": "8f47dc7401999924dba5cb3003194071",
"text": "Few types of signal streams are as ubiquitous as music. Here we consider the problem of extracting essential ingredients of music signals, such as well-defined global temporal structure in the form of nested periodicities (or meter). Can we construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style? Because recurrent neural networks can in principle learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard recurrent neural networks (RNNs) often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing & counting and learning of context sensitive languages. In the current study we show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.",
"title": ""
},
{
"docid": "b6ae8a6fdd207686ae4c5108a4b77f1f",
"text": "Many IoT applications ingest and process time series data with emphasis on 5Vs (Volume, Velocity, Variety, Value and Veracity). To design and test such systems, it is desirable to have a high-performance traffic generator specifically designed for time series data, preferably using archived data to create a truly realistic workload. However, most existing traffic generator tools either are designed for generic network applications, or only produce synthetic data based on certain time series models. In addition, few have raised their performance bar to millions-packets-per-second level with minimum time violations. In this paper, we design, implement and evaluate a highly efficient and scalable time series traffic generator for IoT applications. Our traffic generator stands out in the following four aspects: 1) it generates time-conforming packets based on high-fidelity reproduction of archived time series data; 2) it leverages an open-source Linux Exokernel middleware and a customized userspace network subsystem; 3) it includes a scalable 10G network card driver and uses \"absolute\" zero-copy in stack processing; and 4) it has an efficient and scalable application-level software architecture and threading model. We have conducted extensive experiments on both a quad-core Intel workstation and a 20-core Intel server equipped with Intel X540 10G network cards and Samsung's NVMe SSDs. Compared with a stock Linux baseline and a traditional mmap-based file I/O approach, we observe that our traffic generator significantly outperforms other alternatives in terms of throughput (10X), scalability (3.6X) and time violations (46.2X).",
"title": ""
},
{
"docid": "bc388488c5695286fe7d7e56ac15fa94",
"text": "In this paper a new parking guiding and information system is described. The system assists the user to find the most suitable parking space based on his/her preferences and learned behavior. The system takes into account parameters such as driver's parking duration, arrival time, destination, type preference, cost preference, driving time, and walking distance as well as time-varying parking rules and pricing. Moreover, a prediction algorithm is proposed to forecast the parking availability for different parking locations for different times of the day based on the real-time parking information, and previous parking availability/occupancy data. A novel server structure is used to implement the system. Intelligent parking assist system reduces the searching time for parking spots in urban environments, and consequently leads to a reduction in air pollutions and traffic congestion. On-street parking meters, off-street parking garages, as well as free parking spaces are considered in our system.",
"title": ""
},
{
"docid": "eff903cb53fc7f7e9719a2372d517ab3",
"text": "The freshwater angelfishes (Pterophyllum) are South American cichlids that have become very popular among aquarists, yet scarce information on their culture and aquarium husbandry exists. We studied Pterophyllum scalare to analyze dietary effects on fecundity, growth, and survival of eggs and larvae during 135 days. Three diets were used: A) decapsulated cysts of Artemia, B) commercial dry fish food, and C) a mix diet of the rotifer Brachionus plicatilis and the cladoceran Daphnia magna. The initial larval density was 100 organisms in each 40 L aquarium. With diet A, larvae reached a maximum weight of 3.80 g, a total length of 6.3 cm, and a height of 5.8 cm; with diet B: 2.80 g, 4.81 cm, and 4.79 cm, and with diet C: 3.00 g, 5.15 cm, and 5.10 cm, respectively. Significant differences were observed between diet A, and diet B and C, but no significantly differences were observed between diets B and C. Fecundity varied from 234 to 1,082 eggs in 20 and 50 g females, respectively. Egg survival ranged from 87.4% up to 100%, and larvae survival (80 larvae/40 L aquarium) from 50% to 66.3% using diet B and A, respectively. Live food was better for growing fish than the commercial balanced food diet. Fecundity and survival are important factors in planning a good production of angelfish.",
"title": ""
},
{
"docid": "a4154317f6bb6af635edb1b2ef012d09",
"text": "The pulp industry in Taiwan discharges tons of wood waste and pulp sludge (i.e., wastewater-derived secondary sludge) per year. The mixture of these two bio-wastes, denoted as wood waste with pulp sludge (WPS), has been commonly converted to organic fertilizers for agriculture application or to soil conditioners. However, due to energy demand, the WPS can be utilized in a beneficial way to mitigate an energy shortage. This study elucidated the performance of applying torrefaction, a bio-waste to energy method, to transform the WPS into solid bio-fuel. Two batches of the tested WPS (i.e., WPS1 and WPS2) were generated from a virgin pulp factory in eastern Taiwan. The WPS1 and WPS2 samples contained a large amount of organics and had high heating values (HHV) on a dry-basis (HHD) of 18.30 and 15.72 MJ/kg, respectively, exhibiting a potential for their use as a solid bio-fuel. However, the wet WPS as received bears high water and volatile matter content and required de-watering, drying, and upgrading. After a 20 min torrefaction time (tT), the HHD of torrefied WPS1 (WPST1) can be enhanced to 27.49 MJ/kg at a torrefaction temperature (TT) of 573 K, while that of torrefied WPS2 (WPST2) increased to 19.74 MJ/kg at a TT of 593 K. The corresponding values of the energy densification ratio of torrefied solid bio-fuels of WPST1 and WPST2 can respectively rise to 1.50 and 1.25 times that of the raw bio-waste. The HHD of WPST1 of 27.49 MJ/kg is within the range of 24–35 MJ/kg for bituminous coal. In addition, the wet-basis HHV of WPST1 with an equilibrium moisture content of 5.91 wt % is 25.87 MJ/kg, which satisfies the Quality D coal specification of the Taiwan Power Co., requiring a value of above 20.92 MJ/kg.",
"title": ""
},
{
"docid": "6b6fd5bfbe1745a49ce497490cef949d",
"text": "This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to jammer experience much higher secrecy rates than non-adaptive transmitters.",
"title": ""
},
{
"docid": "fc3aeb32f617f7a186d41d56b559a2aa",
"text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.",
"title": ""
},
{
"docid": "2181c4d52e721aab267057b8f271a9ee",
"text": "Recently, the widespread availability of consumer grade drones is responsible for the new concerns of air traffic control. This paper investigates the feasibility of drone detection by passive bistatic radar (PBR) system. Wuhan University has successfully developed a digitally multichannel PBR system, which is dedicated for the drone detection. Two typical trials with a cooperative drone have been designed to examine the system's capability of small drone detection. The agreement between experimental results and ground truth indicate the effectiveness of sensing and processing method, which verifies the practicability and prospects of drone detection by this digitally multichannel PBR system.",
"title": ""
},
{
"docid": "fb6494dcf01a927597ff784a3323e8c2",
"text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.",
"title": ""
},
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "7c8412c5a7c71fe76105983d3bf7e16d",
"text": "A novel wideband dual-cavity-backed circularly polarized (CP) crossed dipole antenna is presented in this letter. The exciter of the antenna comprises two classical orthogonal straight dipoles for a simple design. Dual-cavity structure is employed to achieve unidirectional radiation and improve the broadside gain. In particular, the rim edges of the cavity act as secondary radiators, which contribute to significantly enhance the overall CP performance of the antenna. The final design with an overall size of 0.57λ<sub>o</sub> × 0.57λ<sub>o</sub> × 0.24λ<sub>o</sub> where λ<sub>o</sub> is the free-space wavelength at the lowest CP operating frequency of 2.0 GHzb yields a measured –10 dB impedance bandwidth (BW) of 79.4% and 3 dB axial-ratio BW of 66.7%. The proposed antenna exhibits right-handed circular polarization with a maximum broadside gain of about 9.7 dBic.",
"title": ""
},
{
"docid": "90fe763855ca6c4fabe4f9d042d5c61a",
"text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with an handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.",
"title": ""
},
{
"docid": "ba0051fdc72efa78a7104587042cea64",
"text": "Open innovation breaks the original innovation border of organization and emphasizes the use of suppliers, customers, partners, and other internal and external innovative thinking and resources. How to effectively implement and manage open innovation has become a new business problem. Business ecosystem is the network system of value creation and co-evolution achieved by suppliers, users, partner, and other groups with self-organization mode. This study began with the risk analysis of open innovation implementation; then innovation process was embedded into business ecosystem structure; open innovation mode based on business ecosystem was proposed; business ecosystem based on open innovation was built according to influence degree of each innovative object. Study finds that both sides have a mutual promotion relationship, which provides a new analysis perspective for open innovation and business ecosystem; at the same time, it is also conducive to guiding the concrete practice of implementing open innovation.",
"title": ""
},
{
"docid": "f27391f29b44bfa9989146566a288b79",
"text": "An appealing feature of blockchain technology is smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. In this paper, we conduct a systematic mapping study to collect all research that is relevant to smart contracts from a technical perspective. The aim of doing so is to identify current research topics and open challenges for future studies in smart contract research. We extract 24 papers from different scientific databases. The results show that about two thirds of the papers focus on identifying and tackling smart contract issues. Four key issues are identified, namely, codifying, security, privacy and performance issues. The rest of the papers focuses on smart contract applications or other smart contract related topics. Research gaps that need to be addressed in future studies are provided.",
"title": ""
}
] |
scidocsrr
|
82b68b01e41f85d2e2337d9dc5f4b877
|
Waveform design and imaging method of MIMO ISAR based on orthogonal LFM signal
|
[
{
"docid": "18ee965b96c72dbbfc8ce833548a4f72",
"text": "With the inverse synthetic aperture radar (ISAR) imaging model, targets should move smoothly during the coherent processing interval (CPI). Since the CPI is quite long, fluctuations of a target's velocity and gesture will deteriorate image quality. This paper presents a multiple-input-multiple-output (MIMO)-ISAR imaging method by combining MIMO techniques and ISAR imaging theory. By using a special M-transmitter N-receiver linear array, a group of M orthogonal phase-code modulation signals with identical bandwidth and center frequency is transmitted. With a matched filter set, every target response corresponding to the orthogonal signals can be isolated at each receiving channel, and range compression is completed simultaneously. Based on phase center approximation theory, the minimum entropy criterion is used to rearrange the echo data after the target's velocity has been estimated, and then, the azimuth imaging will finally finish. The analysis of imaging and simulation results show that the minimum CPI of the MIMO-ISAR imaging method is 1/MN of the conventional ISAR imaging method under the same azimuth-resolution condition. It means that most flying targets can satisfy the condition that targets should move smoothly during CPI; therefore, the applicability and the quality of ISAR imaging will be improved.",
"title": ""
}
] |
[
{
"docid": "1c960375b6cdebfbd65ea0124dcdce0f",
"text": "Parameterized unit tests extend the current industry practice of using closed unit tests defined as parameterless methods. Parameterized unit tests separate two concerns: 1) They specify the external behavior of the involved methods for all test arguments. 2) Test cases can be re-obtained as traditional closed unit tests by instantiating the parameterized unit tests. Symbolic execution and constraint solving can be used to automatically choose a minimal set of inputs that exercise a parameterized unit test with respect to possible code paths of the implementation. In addition, parameterized unit tests can be used as symbolic summaries which allows symbolic execution to scale for arbitrary abstraction levels. We have developed a prototype tool which computes test cases from parameterized unit tests. We report on its first use testing parts of the .NET base class library.",
"title": ""
},
{
"docid": "747df95d08e6e5b1802dacf4e84b6642",
"text": "One of the key requirement of many schemes is that of random numbers. Sequence of random numbers are used at several stages of a standard cryptographic protocol. A simple example is of a Vernam cipher, where a string of random numbers is added to massage string to generate the encrypted code. It is represented as C = M ⊕ K where M is the message, K is the key and C is the ciphertext. It has been mathematically shown that this simple scheme is unbreakable is key K as long as M and is used only once. For a good cryptosystem, the security of the cryptosystem is not be based on keeping the algorithm secret but solely on keeping the key secret. The quality and unpredictability of secret data is critical to securing communication by modern cryptographic techniques. Generation of such data for cryptographic purposes typically requires an unpredictable physical source of random data. In this manuscript, we present studies of three different methods for producing random number. We have tested them by studying its frequency, correlation as well as using the test suit from NIST.",
"title": ""
},
{
"docid": "0e4334595aeec579e8eb35b0e805282d",
"text": "In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques and madmom provides a module that implements some methods commonly used in MIR such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.",
"title": ""
},
{
"docid": "375c03e7b5f36239023def78678fe9bd",
"text": "This paper presents an extended version of GoalOriented Requirements Analysis Method called AGORA, where attribute values, e.g. contribution values and preference matrices, are added to goal graphs. An analyst attaches contribution values and preference values to edges and nodes of a goal graph respectively during the process for refining and decomposing the goals. The contribution value of an edge stands for the degree of the contribution of the sub-goal to the achievement of its parent goal, while the preference matrix of a goal represents the preference of the goal for each stakeholder. These values can help an analyst to choose and adopt a goal from the alternatives of the goals, to recognize the conflicts among the goals, and to analyze the impact of requirements changes. Furthermore the values on a goal graph and its structural characteristics allow the analyst to estimate the quality of the resulting requirements specification, such as correctness, unambiguity, completeness etc. The estimated quality values can suggest to him which goals should be improved and/or refined. In addition, we have applied AGORA to a user account system",
"title": ""
},
{
"docid": "dcd6effc28744aa875a37ad28ecc68e1",
"text": "The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.",
"title": ""
},
{
"docid": "9f6f22e320b91838c9be8f56d3f0564d",
"text": "We present an approach for ontology population from natural language English texts that extracts RDF triples according to FrameBase, a Semantic Web ontology derived from FrameNet. Processing is decoupled in two independently-tunable phases. First, text is processed by several NLP tasks, including Semantic Role Labeling (SRL), whose results are integrated in an RDF graph of mentions, i.e., snippets of text denoting some entity/fact. Then, the mention graph is processed with SPARQL-like rules using a specifically created mapping resource from NomBank/PropBank/FrameNet annotations to FrameBase concepts, producing a knowledge graph whose content is linked to DBpedia and organized around semantic frames, i.e., prototypical descriptions of events and situations. A single RDF/OWL representation is used where each triple is related to the mentions/tools it comes from. We implemented the approach in PIKES, an open source tool that combines two complementary SRL systems and provides a working online demo. We evaluated PIKES on a manually annotated gold standard, assessing precision/recall in (i) populating FrameBase ontology, and (ii) extracting semantic frames modeled after standard predicate models, for comparison with state-of-the-art tools for the Semantic Web. We also evaluated (iii) sampled precision and execution times on a large corpus of 110 K Wikipedia-like pages.",
"title": ""
},
{
"docid": "68c2b36ae2be6a0bc0a42cb8fcf284fe",
"text": "We present a data-driven shape model for reconstructing human body models from one or more 2D photos. One of the key tasks in reconstructing the 3D model from image data is shape recovery, a task done until now in utterly geometric way, in the domain of human body modeling. In contrast, we adopt a data-driven, parameterized deformable model that is acquired from a collection of range scans of real human body. The key idea is to complement the image-based reconstruction method by leveraging the quality shape and statistic information accumulated from multiple shapes of range-scanned people. In the presence of ambiguity either from the noise or missing views, our technique has a bias towards representing as much as possible the previously acquired ‘knowledge’ on the shape geometry. Texture coordinates are then generated by projecting the modified deformable model onto the front and back images. Our technique has shown to reconstruct successfully human body models from minimum number images, even from a single image input.",
"title": ""
},
{
"docid": "4179729bef0b37bb90d58395bb4dfd18",
"text": "A navigation mesh is a representation of a 2D or 3D virtual environment that enables path planning and crowd simulation for walking characters. Various state-of-the-art navigation meshes exist, but there is no standardized way of evaluating or comparing them. Each implementation is in a different state of maturity, has been tested on different hardware, uses different example environments, and may have been designed with a different application in mind.\n In this paper, we conduct the first comparative study of navigation meshes. First, we give general definitions of 2D and 3D environments and navigation meshes. Second, we propose theoretical properties by which navigation meshes can be classified. Third, we introduce metrics by which the quality of a navigation mesh implementation can be measured objectively. Finally, we use these metrics to compare various state-of-the-art navigation meshes in a range of 2D and 3D environments.\n We expect that this work will set a new standard for the evaluation of navigation meshes, that it will help developers choose an appropriate navigation mesh for their application, and that it will steer future research on navigation meshes in interesting directions.",
"title": ""
},
{
"docid": "28fcee5c28c2b3aae6f4761afb00ebc2",
"text": "The presence of sarcasm in text can hamper the performance of sentiment analysis. The challenge is to detect the existence of sarcasm in texts. This challenge is compounded when bilingual texts are considered, for example using Malay social media data. In this paper a feature extraction process is proposed to detect sarcasm using bilingual texts; more specifically public comments on economic related posts on Facebook. Four categories of feature that can be extracted using natural language processing are considered; lexical, pragmatic, prosodic and syntactic. We also investigated the use of idiosyncratic feature to capture the peculiar and odd comments found in a text. To determine the effectiveness of the proposed process, a non-linear Support Vector Machine was used to classify texts, in terms of the identified features, according to whether they included sarcastic content or not. The results obtained demonstrate that a combination of syntactic, pragmatic and prosodic features produced the best performance with an F-measure score of 0.852.",
"title": ""
},
{
"docid": "3cd565192b29593550032f695b61087c",
"text": "Forcing occurs when a magician influences the audience's decisions without their awareness. To investigate the mechanisms behind this effect, we examined several stimulus and personality predictors. In Study 1, a magician flipped through a deck of playing cards while participants were asked to choose one. Although the magician could influence the choice almost every time (98%), relatively few (9%) noticed this influence. In Study 2, participants observed rapid series of cards on a computer, with one target card shown longer than the rest. We expected people would tend to choose this card without noticing that it was shown longest. Both stimulus and personality factors predicted the choice of card, depending on whether the influence was noticed. These results show that combining real-world and laboratory research can be a powerful way to study magic and can provide new methods to study the feeling of free will.",
"title": ""
},
{
"docid": "0a2e59ab99b9666d8cf3fb31be9fa40c",
"text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals are well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To our best knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.",
"title": ""
},
{
"docid": "6c2095e83fd7bc3b7bd5bd259d1ae9bb",
"text": "This paper basically deals with design of an IoT Smart Home System (IoTSHS) which can provide the remote control to smart home through mobile, infrared(IR) remote control as well as with PC/Laptop. The controller used to design the IoTSHS is WiFi based microcontroller. Temperature sensor is provided to indicate the room temperature and tell the user if it's needed to turn the AC ON or OFF. The designed IoTSHS need to be interfaced through switches or relays with the items under control through the power distribution box. When a signal is sent from IoTSHS, then the switches will connect or disconnect the item under control. The designed IoT smart home system can also provide remote controlling for the people who cannot use smart phone to control their appliances Thus, the designed IoTSHS can benefits the whole parts in the society by providing advanced remote controlling for the smart home. The designed IoTSHS is controlled through remote control which uses IR and WiFi. The IoTSHS is capable to connect to WiFi and have a web browser regardless to what kind of operating system it uses, to control the appliances. No application program is needed to purchase, download, or install. In WiFi controlling, the IoTSHS will give a secured Access Point (AP) with a particular service set identifier (SSID). The user will connect the device (e.g. mobile-phone or Laptop/PC) to this SSID with providing the password and then will open the browser and go to particular fixed link. This link will open an HTML web page which will allow the user to interface between the Mobile-Phone/Laptop/PC and the appliances. In addition, the IoTSHS may connect to the home router so that the user can control the appliances with keeping connection with home router. The proposed IoTSHS was designed, programmed, fabricated and tested with excellent results.",
"title": ""
},
{
"docid": "7669ed44cc1bada0cfaf28172738e6f5",
"text": "The widespread deployment of high-data-rate wireless connectivity was enabled by the adoption of the WiGig (802.11ad) standard, consequently placing a challenge on integrated Power Amplifiers (PAs). To comply with system requirements, the PA must cover bands from 57 to 66GHz and deliver up to 10dBm RF modulated power, while OFDM modulations up to 16 or 64QAM are supported, implying a large Peak-to-Average Power Ratio (PAPR).",
"title": ""
},
{
"docid": "b37fb73811110ec7a095e98df66f0ee0",
"text": "This paper looks into recent developments and research trends in collision avoidance/warning systems and automation of vehicle longitudinal/lateral control tasks. It is an attempt to provide a bigger picture of the very diverse, detailed and highly multidisciplinary research in this area. Based on diversely selected research, this paper explains the initiatives for automation in different levels of transportation system with a specific emphasis on the vehicle-level automation. Human factor studies and legal issues are analyzed as well as control algorithms. Drivers’ comfort and well being, increased safety, and increased highway capacity are among the most important initiatives counted for automation. However, sometimes these are contradictory requirements. Relying on an analytical survey of the published research, we will try to provide a more clear understanding of the impact of automation/warning systems on each of the above-mentioned factors. The discussion of sensory issues requires a dedicated paper due to its broad range and is not addressed in this paper.",
"title": ""
},
{
"docid": "8107340ac05353db18f1bb84a6e88a88",
"text": "Although many examples exist for shared neural representations of self and other, it is unknown how such shared representations interact with the rest of the brain. Furthermore, do high-level inference-based shared mentalizing representations interact with lower level embodied/simulation-based shared representations? We used functional neuroimaging (fMRI) and a functional connectivity approach to assess these questions during high-level inference-based mentalizing. Shared mentalizing representations in ventromedial prefrontal cortex, posterior cingulate/precuneus, and temporo-parietal junction (TPJ) all exhibited identical functional connectivity patterns during mentalizing of both self and other. Connectivity patterns were distributed across low-level embodied neural systems such as the frontal operculum/ventral premotor cortex, the anterior insula, the primary sensorimotor cortex, and the presupplementary motor area. These results demonstrate that identical neural circuits are implementing processes involved in mentalizing of both self and other and that the nature of such processes may be the integration of low-level embodied processes within higher level inference-based mentalizing.",
"title": ""
},
{
"docid": "c5eb252d17c2bec8ab168ca79ec11321",
"text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260) 1 ar X iv :1 80 2. 08 67 4v 1 [ cs .L G ] 2 3 Fe b 20 18",
"title": ""
},
{
"docid": "4bbd172be9833ae46dc3cf54bdd82641",
"text": "Underwater communication systems have drawn the attention of the research community in the last 15 years. This growing interest can largely be attributed to new civil and military applications enabled by large-scale networks of underwater devices (e.g., underwater static sensors, unmanned autonomous vehicles (AUVs), and autonomous robots), which can retrieve information from the aquatic and marine environment, perform in-network processing on the extracted data, and transmit the collected information to remote locations. Currently underwater communication systems are inherently hardware-based and rely on closed and inflexible architectural design. This imposes significant challenges into adopting new underwater communication and networking technologies, prevent the provision of truly-differentiated services to highly diverse underwater applications, and induce great barriers to integrate heterogeneous underwater devices. Software Defined Networking, recognized as the next-generation networking paradigm, relies on the highly flexible, programmable, and virtualizable network architecture to dramatically improve network resource utilization, simplify network management, reduce operating cost, and promote innovation and evolution. In this paper, a software-defined architecture, namely SoftWater, is first introduced to facilitate the development of the next-generation underwater communication systems. More specifically, by exploiting the network function virtualization (NFV) and network virtualization concepts, SoftWater architecture can easily incorporate new underwater communication solutions, accordingly maximize the network capacity, can achieve the network robustness and energy efficiency, as well as can provide truly differentiated and scalable networking services. Consequently, the SoftWater architecture can simultaneously support a variety of different underwater applications, and can enable the interoperability of underwater devices from different manufacturers that operate on different underwater communication technologies based on acoustic, optical, or radio waves. Moreover, the essential network management tools of SoftWater are discussed, including reconfigurable multi-controller placement, hybrid in-band and out-of-band control traffic balancing, and utility-optimal network virtualization. Furthermore, the major benefits of ∗ Corresponding author. Tel.: +01404 934 9932. E-mail addresses: [email protected] (I.F. Akyildiz), [email protected] (P. Wang), [email protected] (S.-C. Lin). http://dx.doi.org/10.1016/j.adhoc.2016.02.016 1570-8705/© 2016 Elsevier B.V. All rights reserved. Please cite this article as: I.F. Akyildiz et al., SoftWater: Software-defined networking for next-generation underwater communication systems, Ad Hoc Networks (2016), http://dx.doi.org/10.1016/j.adhoc.2016.02.016 2 I.F. Akyildiz et al. / Ad Hoc Networks xxx (2016) xxx–xxx ARTICLE IN PRESS JID: ADHOC [m3Gdc; April 9, 2016;17:52 ] SoftWater architecture are demonstrated by introducing software-defined underwater networking solutions, including the throughput-optimal underwater routing, SDN-enhanced fault recovery, and software-defined underwater mobility management. The research challenges to realize the SoftWater are also discussed in detail. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a964f8aeb9d48c739716445adc58e98c",
"text": "A passive aeration composting study was undertaken to investigate the effects of aeration pipe orientation (PO) and perforation size (PS) on some physico-chemical properties of chicken litter (chicken manure + sawdust) during composting. The experimental set up was a two-factor completely randomised block design with two pipe orientations: horizontal (Ho) and vertical (Ve), and three perforation sizes: 15, 25 and 35 mm diameter. The properties monitored during composting were pile temperature, moisture content (MC), pH, electrical conductivity (EC), total carbon (C(T)), total nitrogen (N(T)) and total phosphorus (P(T)). Moisture level in the piles was periodically replenished to 60% for efficient microbial activities. The results of the study showed that optimum composting conditions (thermophilic temperatures and sanitation requirements) were attained in all the piles. During composting, both PO and PS significantly affected pile temperature, moisture level, pH, C(T) loss and P(T) gain. EC was only affected by PO while N(T) was affected by PS. Neither PO nor PS had a significant effect on the C:N ratio. A vertical pipe was effective for uniform air distribution, hence, uniform composting rate within the composting pile. The final values showed that PO of Ve and PS of 35 mm diameter resulted in the least loss in N(T). The PO of Ho was as effective as Ve in the conservation of C(T) and P(T). Similarly, the three PSs were equally effective in the conservation of C(T) and P(T). In conclusion, the combined effects of PO and PS showed that treatments Ve35 and Ve15 were the most effective in minimizing N(T) loss.",
"title": ""
},
{
"docid": "8e896b9006ecc82fcfa4f6905a3dc5ae",
"text": "In this paper, we present a generalized Wishart classifier derived from a non-Gaussian model for polarimetric synthetic aperture radar (PolSAR) data. Our starting point is to demonstrate that the scale mixture of Gaussian (SMoG) distribution model is suitable for modeling PolSAR data. We show that the distribution of the sample covariance matrix for the SMoG model is given as a generalization of the Wishart distribution and present this expression in integral form. We then derive the closed-form solution for one particular SMoG distribution, which is known as the multivariate K-distribution. Based on this new distribution for the sample covariance matrix, termed as the K -Wishart distribution, we propose a Bayesian classification scheme, which can be used in both supervised and unsupervised modes. To demonstrate the effect of including non-Gaussianity, we present a detailed comparison with the standard Wishart classifier using airborne EMISAR data.",
"title": ""
},
{
"docid": "0209627cd57745dc5c06dc5ff9723352",
"text": "The cloud computing provides on demand services over the Internet with the help of a large amount of virtual storage. The main features of cloud computing is that the user does not have any setup of expensive computing infrastructure and the cost of its services is less. In the recent years, cloud computing integrates with the industry and many other areas, which has been encouraging the researcher to research on new related technologies. Due to the availability of its services & scalability for computing processes individual users and organizations transfer their application, data and services to the cloud storage server. Regardless of its advantages, the transformation of local computing to remote computing has brought many security issues and challenges for both consumer and provider. Many cloud services are provided by the trusted third party which arises new security threats. The cloud provider provides its services through the Internet and uses many web technologies that arise new security issues. This paper discussed about the basic features of the cloud computing, security issues, threats and their solutions. Additionally, the paper describes several key topics related to the cloud, namely cloud architecture framework, service and deployment model, cloud technologies, cloud security concepts, threats, and attacks. The paper also discusses a lot of open research issues related to the cloud security. Keywords—Cloud Computing, Cloud Framework, Cloud Security, Cloud Security Challenges, Cloud Security Issues",
"title": ""
}
] |
scidocsrr
|
d36077d0c9dfa54ef58d45554e21479b
|
Building Streetview Datasets for Place Recognition and City Reconstruction
|
[
{
"docid": "64ce725037b72921b979583f6fdc4f27",
"text": "We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films ‘Groundhog Day’ and ‘Casablanca’.",
"title": ""
},
{
"docid": "f085832faf1a2921eedd3d00e8e592db",
"text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"title": ""
}
] |
[
{
"docid": "c39b143861d1e0c371ec1684bb29f4cc",
"text": "Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed.",
"title": ""
},
{
"docid": "eb42c7dafed682a0643b46f49d2a86ec",
"text": "OBJECTIVE\nTo evaluate the effectiveness of telephone based peer support in the prevention of postnatal depression.\n\n\nDESIGN\nMultisite randomised controlled trial.\n\n\nSETTING\nSeven health regions across Ontario, Canada.\n\n\nPARTICIPANTS\n701 women in the first two weeks postpartum identified as high risk for postnatal depression with the Edinburgh postnatal depression scale and randomised with an internet based randomisation service.\n\n\nINTERVENTION\nProactive individualised telephone based peer (mother to mother) support, initiated within 48-72 hours of randomisation, provided by a volunteer recruited from the community who had previously experienced and recovered from self reported postnatal depression and attended a four hour training session.\n\n\nMAIN OUTCOME MEASURES\nEdinburgh postnatal depression scale, structured clinical interview-depression, state-trait anxiety inventory, UCLA loneliness scale, and use of health services.\n\n\nRESULTS\nAfter web based screening of 21 470 women, 701 (72%) eligible mothers were recruited. A blinded research nurse followed up more than 85% by telephone, including 613 at 12 weeks and 600 at 24 weeks postpartum. At 12 weeks, 14% (40/297) of women in the intervention group and 25% (78/315) in the control group had an Edinburgh postnatal depression scale score >12 (chi(2)=12.5, P<0.001; number need to treat 8.8, 95% confidence interval 5.9 to 19.6; relative risk reduction 0.46, 95% confidence interval 0.24 to 0.62). There was a positive trend in favour of the intervention group for maternal anxiety but not loneliness or use of health services. For ethical reasons, participants identified with clinical depression at 12 weeks were referred for treatment, resulting in no differences between groups at 24 weeks. Of the 221 women in the intervention group who received and evaluated their experience of peer support, over 80% were satisfied and would recommend this support to a friend.\n\n\nCONCLUSION\nTelephone based peer support can be effective in preventing postnatal depression among women at high risk.\n\n\nTRIAL REGISTRATION\nISRCTN 68337727.",
"title": ""
},
{
"docid": "6080612b8858d633c3f63a3d019aef58",
"text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.",
"title": ""
},
{
"docid": "425270bbfd1290a0692afeea95fa090f",
"text": "This paper introduces a bounding gait control algorithm that allows a successful implementation of duty cycle modulation in the MIT Cheetah 2. Instead of controlling leg stiffness to emulate a `springy leg' inspired from the Spring-Loaded-Inverted-Pendulum (SLIP) model, the algorithm prescribes vertical impulse by generating scaled ground reaction forces at each step to achieve the desired stance and total stride duration. Therefore, we can control the duty cycle: the percentage of the stance phase over the entire cycle. By prescribing the required vertical impulse of the ground reaction force at each step, the algorithm can adapt to variable duty cycles attributed to variations in running speed. Following linear momentum conservation law, in order to achieve a limit-cycle gait, the sum of all vertical ground reaction forces must match vertical momentum created by gravity during a cycle. In addition, we added a virtual compliance control in the vertical direction to enhance stability. The stiffness of the virtual compliance is selected based on the eigenvalue analysis of the linearized Poincaré map and the chosen stiffness is 700 N/m, which corresponds to around 12% of the stiffness used in the previous trotting experiments of the MIT Cheetah, where the ground reaction forces are purely caused by the impedance controller with equilibrium point trajectories. This indicates that the virtual compliance control does not significantly contributes to generating ground reaction forces, but to stability. The experimental results show that the algorithm successfully prescribes the duty cycle for stable bounding gaits. This new approach can shed a light on variable speed running control algorithm.",
"title": ""
},
{
"docid": "85964cef28799c2a37fa3ab6aef992fb",
"text": "Rabies developed in an Austrian man after he was bitten by a dog in Agadir, Morocco. Diagnosis was confirmed by reverse transcription-polymerase chain reaction and immunohistochemistry. The patient's girlfriend was bitten by the same dog, but she did not become ill.",
"title": ""
},
{
"docid": "28d19824a598ae20039f2ed5d8885234",
"text": "Soft-tissue augmentation of the face is an increasingly popular cosmetic procedure. In recent years, the number of available filling agents has also increased dramatically, improving the range of options available to physicians and patients. Understanding the different characteristics, capabilities, risks, and limitations of the available dermal and subdermal fillers can help physicians improve patient outcomes and reduce the risk of complications. The most popular fillers are those made from cross-linked hyaluronic acid (HA). A major and unique advantage of HA fillers is that they can be quickly and easily reversed by the injection of hyaluronidase into areas in which elimination of the filler is desired, either because there is excess HA in the area or to accelerate the resolution of an adverse reaction to treatment or to the product. In general, a lower incidence of complications (especially late-occurring or long-lasting effects) has been reported with HA fillers compared with the semi-permanent and permanent fillers. The implantation of nonreversible fillers requires more and different expertise on the part of the physician than does injection of HA fillers, and may produce effects and complications that are more difficult or impossible to manage even by the use of corrective surgery. Most practitioners use HA fillers as the foundation of their filler practices because they have found that HA fillers produce excellent aesthetic outcomes with high patient satisfaction, and a low incidence and severity of complications. Only limited subsets of physicians and patients have been able to justify the higher complexity and risks associated with the use of nonreversible fillers.",
"title": ""
},
{
"docid": "f94f8d5f1a0ca74b94a1086d6d94b0d3",
"text": "The horizontal binocular disparity is a critical factor for the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that possess the disparity within the 'comfort zones' and remain still in the depth direction are considered comfortable to the viewers as 2D images. However, the difference in brain activities between processing such comfortable stereoscopic images and 2D images is still less studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits the delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that compared to the processing of the 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable.",
"title": ""
},
{
"docid": "2adcf4db59bb321132a10445292d7fe9",
"text": "In this paper, we present work on learning analytics that aims to support learners and teachers through dashboard applications, ranging from small mobile applications to learnscapes on large public displays. Dashboards typically capture and visualize traces of learning activities, in order to promote awareness, reflection, and sense-making, and to enable learners to define goals and track progress toward these goals. Based on an analysis of our own work and a broad range of similar learning dashboards, we identify HCI issues for this exciting research area.",
"title": ""
},
{
"docid": "e36236681b84ac00a56af6c769e9436c",
"text": "Due to their potential commercial value and the associated great research challenges, recommender systems have been extensively studied by both academia and industry recently. However, the data sparsity problem of the involved user-item matrix seriously affects the recommendation quality. Many existing approaches to recommender systems cannot easily deal with users who have made very few ratings. In view of the exponential growth of information generated by online users, social contextual information analysis is becoming important for many Web applications. In this article, we propose a factor analysis approach based on probabilistic matrix factorization to alleviate the data sparsity and poor prediction accuracy problems by incorporating social contextual information, such as social networks and social tags. The complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations. Moreover, the experimental results show that our method performs much better than the state-of-the-art approaches, especially in the circumstance that users have made few ratings.",
"title": ""
},
{
"docid": "76156cea2ef1d49179d35fd8f333b011",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "89dcd15d3f7e2f538af4a2654f144dfb",
"text": "E-waste comprises discarded electronic appliances, of which computers and mobile telephones are disproportionately abundant because of their short lifespan. The current global production of E-waste is estimated to be 20-25 million tonnes per year, with most E-waste being produced in Europe, the United States and Australasia. China, Eastern Europe and Latin America will become major E-waste producers in the next ten years. Miniaturisation and the development of more efficient cloud computing networks, where computing services are delivered over the internet from remote locations, may offset the increase in E-waste production from global economic growth and the development of pervasive new technologies. E-waste contains valuable metals (Cu, platinum group) as well as potential environmental contaminants, especially Pb, Sb, Hg, Cd, Ni, polybrominated diphenyl ethers (PBDEs), and polychlorinated biphenyls (PCBs). Burning E-waste may generate dioxins, furans, polycyclic aromatic hydrocarbons (PAHs), polyhalogenated aromatic hydrocarbons (PHAHs), and hydrogen chloride. The chemical composition of E-waste changes with the development of new technologies and pressure from environmental organisations on electronics companies to find alternatives to environmentally damaging materials. Most E-waste is disposed in landfills. Effective reprocessing technology, which recovers the valuable materials with minimal environmental impact, is expensive. Consequently, although illegal under the Basel Convention, rich countries export an unknown quantity of E-waste to poor countries, where recycling techniques include burning and dissolution in strong acids with few measures to protect human health and the environment. Such reprocessing initially results in extreme localised contamination followed by migration of the contaminants into receiving waters and food chains. E-waste workers suffer negative health effects through skin contact and inhalation, while the wider community are exposed to the contaminants through smoke, dust, drinking water and food. There is evidence that E-waste associated contaminants may be present in some agricultural or manufactured products for export.",
"title": ""
},
{
"docid": "0a95cf6687a8e2421907fb94324c5163",
"text": "The existence of multiple memory systems has been proposed in a number of areas, including cognitive psychology, neuropsychology, and the study of animal learning and memory. We examine whether the existence of such multiple systems seems likely on evolutionary grounds. Multiple systems adapted to serve seemingly similar functions, which differ in important ways, are a common evolutionary outcome. The evolution of multiple memory systems requires memory systems to be specialized to such a degree that the functional problems each system handles cannot be handled by another system. We define this condition as functional incompatibility and show that it occurs for a number of the distinctions that have been proposed between memory systems. The distinction between memory for song and memory for spatial locations in birds, and between incremental habit formation and memory for unique episodes in humans and other primates provide examples. Not all memory systems are highly specialized in function, however, and the conditions under which memory systems could evolve to serve a wide range of functions are also discussed.",
"title": ""
},
{
"docid": "1436e4fddc73d33a6cf83abfa5c9eb02",
"text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors that most influence the success of larger ERP projects. For SMEs, factors like the Organizational fit of the ERP system as well as ERP system tests were even more important than Top management support or Project management, which were the most important factors for large-scale companies.",
"title": ""
},
{
"docid": "5eda080188512f8d3c5f882c1114e1c8",
"text": "Knowledge mapping is one of the most popular techniques used to identify knowledge in organizations. Using knowledge mapping techniques; a large and complex set of knowledge resources can be acquired and navigated more easily. Knowledge mapping has attracted the senior managers' attention as an assessment tool in recent years and is expected to measure deep conceptual understanding and allow experts in organizations to characterize relationships between concepts within a domain visually. Here the very critical issue is how to identify and choose an appropriate knowledge mapping technique. This paper aims to explore the different types of knowledge mapping techniques and give a general idea of their target contexts to have the way for choosing the appropriate map. It attempts to illustrate which techniques are appropriate, why and where they can be applied, and how these mapping techniques can be managed. The paper is based on the comprehensive review of papers on knowledge mapping techniques. In addition, this paper attempts to further clarify the differences among these knowledge mapping techniques and the main purpose for using each. Eventually, it is recommended that experts must understand the purpose for which the map is being developed before proceeding to activities related to any knowledge management dimensions; in order to the appropriate knowledge mapping technique .",
"title": ""
},
{
"docid": "bee4bd3019983dc7f66cfd3dafc251ac",
"text": "We present a framework to systematically analyze convolutional neural networks (CNNs) used in classification of cars in autonomous vehicles. Our analysis procedure comprises an image generator that produces synthetic pictures by sampling in a lower dimension image modification subspace and a suite of visualization tools. The image generator produces images which can be used to test the CNN and hence expose its vulnerabilities. The presented framework can be used to extract insights of the CNN classifier, compare across classification models, or generate training and validation datasets.",
"title": ""
},
{
"docid": "85fc78cc3f71b784063b8b564e6509a9",
"text": "Numerous research papers have listed different vectors of personally identifiable information leaking via tradition al and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular We b sites. We argue that the landscape is worsening and existing proposals (including the recent U.S. Federal Trade Commission’s report) do not address several key issues. We examined over 100 popular non-OSN Web sites across a number of categories where tens of millions of users representing d iverse demographics have accounts, to see if these sites leak private information to prominent aggregators. Our results raise considerable concerns: we see leakage in sites for every category we examined; fully 56% of the sites directly leak pieces of private information with this result growing to 75% if we also include leakage of a site userid. Sensitive search strings sent to healthcare Web sites and travel itineraries on flight reservation sites are leaked in 9 of the top 10 sites studied for each category. The community needs a clear understanding of the shortcomings of existing privac y protection measures and the new proposals. The growing disconnect between the protection measures and increasing leakage and linkage suggests that we need to move beyond the losing battle with aggregators and examine what roles first-party sites can play in protecting privacy of their use rs.",
"title": ""
},
{
"docid": "0bb2798c21d9f7420ea47c717578e94d",
"text": "Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail.",
"title": ""
},
{
"docid": "15aa0333268dd812546d1cc9c24103b8",
"text": "Relation extraction is the process of identifying instances of specified types of semantic relations in text; relation type extension involves extending a relation extraction system to recognize a new type of relation. We present LGCo-Testing, an active learning system for relation type extension based on local and global views of relation instances. Locally, we extract features from the sentence that contains the instance. Globally, we measure the distributional similarity between instances from a 2 billion token corpus. Evaluation on the ACE 2004 corpus shows that LGCo-Testing can reduce annotation cost by 97% while maintaining the performance level of supervised learning.",
"title": ""
},
{
"docid": "2a0577aa61ca1cbde207306fdb5beb08",
"text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.",
"title": ""
},
{
"docid": "29e4553408f57d9d7acffca58200b1ac",
"text": "With the abundance of exceptionally High Dimensional data, feature selection has become an essential element in the Data Mining process. In this paper, we investigate the problem of efficient feature selection for classification on High Dimensional datasets. We present a novel filter based approach for feature selection that sorts out the features based on a score and then we measure the performance of four different Data Mining classification algorithms on the resulting data. In the proposed approach, we partition the sorted feature and search the important feature in forward manner as well as in reversed manner, while starting from first and last feature simultaneously in the sorted list. The proposed approach is highly scalable and effective as it parallelizes over both attribute and tuples simultaneously allowing us to evaluate many of potential features for High Dimensional datasets. The newly proposed framework for feature selection is experimentally shown to be very valuable with real and synthetic High Dimensional datasets which improve the precision of selected features. We have also tested it to measure classification accuracy against various feature selection process.",
"title": ""
}
] |
scidocsrr
|
3b46cc183e665388fea152ba35f5fc4a
|
Relative Analysis of Hierarchical Routing in Wireless Sensor Networks Using Cuckoo Search
|
[
{
"docid": "1d53b01ee1a721895a17b7d0f3535a28",
"text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.",
"title": ""
}
] |
[
{
"docid": "624ddac45b110bc809db198d60f3cf97",
"text": "Poisson regression models provide a standard framework for the analysis of count data. In practice, however, count data are often overdispersed relative to the Poisson distribution. One frequent manifestation of overdispersion is that the incidence of zero counts is greater than expected for the Poisson distribution and this is of interest because zero counts frequently have special status. For example, in counting disease lesions on plants, a plant may have no lesions either because it is resistant to the disease, or simply because no disease spores have landed on it. This is the distinction between structural zeros, which are inevitable, and sampling zeros, which occur by chance. In recent years there has been considerable interest in models for count data that allow for excess zeros, particularly in the econometric literature. These models complement more conventional models for overdispersion that concentrate on modelling the variance-mean relationship correctly. Application areas are diverse and have included manufacturing defects (Lambert, 1992), patent applications (Crepon & Duguet, 1997), road safety (Miaou, 1994), species abundance (Welsh et al., 1996; Faddy, 1998), medical consultations",
"title": ""
},
{
"docid": "bb3216e89fd98751de0f187b349ba123",
"text": "This study explored the relationships among dispositional self-consciousness, situationally induced-states of self-awareness, ego-mvolvement, and intrinsic motivation Cognitive evaluation theory, as applied to both the interpersonal and intrapersonal spheres, was used as the basis for making predictions about the effects of various types of self-focus Public selfconsciousness, social anxiety, video surveillance and mirror manipulations of self-awareness, and induced ego-involvement were predicted and found to have negative effects on intrinsic motivation since all were hypothesized to involve controlling forms of regulation In contrast, dispositional pnvate self-consciousness and a no-self-focus condition were both found to be unrelated to intrinsic motivation The relationship among these constructs and manipulations was discussed in the context of both Carver and Scheier's (1981) control theory and Deci and Ryan's (1985) motivation theory Recent theory and research on self-awareness, stimulated bv the initial findings of Duval and Wicklund (1972), has suggested that qualitatively distinct styles of attention and consciousness can be involved in the ongoing process of self-regulation (Carver & Scheier, 1981) In particular, Fenigstein, Scheier, and Buss (1975) have distinguished between private self-consciousness and pubhc selfconsciousness as two independent, but not necessarily exclusive, types of attentional focus with important behavioral, cognitive, and affective implications for regulatory processes Pnvate self-consciousness refers to the tendency to be aware of one's thoughts. This research was supported by Research Grant BSN-8018628 from the National Science Foundation The authors are grateful to the following persons who helped this project to reach fruition James Connell Edward Deci, Paul Tero, Shirlev Tracey We are also grateful for the experimental assistance of Margot Cohen Scott Cohen, Loren Feldman, and Darrell Mazlish Thanks also to Eileen Plant and Miriam Gale Requests for reprints should be sent to Richard M Ryan, Department of Psychology, Uniyersity of Rochester, Rochester, NY 14627 Journal of Personality 53 3, September 1985 Copyright © 1985 by Duke University",
"title": ""
},
{
"docid": "873056ee4f2a4fff473dca4e104a4798",
"text": "Key Summary Points Health information technology has been shown to improve quality by increasing adherence to guidelines, enhancing disease surveillance, and decreasing medication errors. Much of the evidence on quality improvement relates to primary and secondary preventive care. The major efficiency benefit has been decreased utilization of care. Effect on time utilization is mixed. Empirically measured cost data are limited and inconclusive. Most of the high-quality literature regarding multifunctional health information technology systems comes from 4 benchmark research institutions. Little evidence is available on the effect of multifunctional commercially developed systems. Little evidence is available on interoperability and consumer health information technology. A major limitation of the literature is its generalizability. Health care experts, policymakers, payers, and consumers consider health information technologies, such as electronic health records and computerized provider order entry, to be critical to transforming the health care industry (1-7). Information management is fundamental to health care delivery (8). Given the fragmented nature of health care, the large volume of transactions in the system, the need to integrate new scientific evidence into practice, and other complex information management activities, the limitations of paper-based information management are intuitively apparent. While the benefits of health information technology are clear in theory, adapting new information systems to health care has proven difficult and rates of use have been limited (9-11). Most information technology applications have centered on administrative and financial transactions rather than on delivering clinical care (12). The Agency for Healthcare Research and Quality asked us to systematically review evidence on the costs and benefits associated with use of health information technology and to identify gaps in the literature in order to provide organizations, policymakers, clinicians, and consumers an understanding of the effect of health information technology on clinical care (see evidence report at www.ahrq.gov). From among the many possible benefits and costs of implementing health information technology, we focus here on 3 important domains: the effects of health information technology on quality, efficiency, and costs. Methods Analytic Frameworks We used expert opinion and literature review to develop analytic frameworks (Table) that describe the components involved with implementing health information technology, types of health information technology systems, and the functional capabilities of a comprehensive health information technology system (13). We modified a framework for clinical benefits from the Institute of Medicine's 6 aims for care (2) and developed a framework for costs using expert consensus that included measures such as initial costs, ongoing operational and maintenance costs, fraction of health information technology penetration, and productivity gains. Financial benefits were divided into monetized benefits (that is, benefits expressed in dollar terms) and nonmonetized benefits (that is, benefits that could not be directly expressed in dollar terms but could be assigned dollar values). Table. 
Health Information Technology Frameworks Data Sources and Search Strategy We performed 2 searches (in November 2003 and January 2004) of the English-language literature indexed in MEDLINE (1995 to January 2004) using a broad set of terms to maximize sensitivity. (See the full list of search terms and sequence of queries in the full evidence report at www.ahrq.gov.) We also searched the Cochrane Central Register of Controlled Trials, the Cochrane Database of Abstracts of Reviews of Effects, and the Periodical Abstracts Database; hand-searched personal libraries kept by content experts and project staff; and mined bibliographies of articles and systematic reviews for citations. We asked content experts to identify unpublished literature. Finally, we asked content experts and peer reviewers to identify newly published articles up to April 2005. Study Selection and Classification Two reviewers independently selected for detailed review the following types of articles that addressed the workings or implementation of a health technology system: systematic reviews, including meta-analyses; descriptive qualitative reports that focused on exploration of barriers; and quantitative reports. We classified quantitative reports as hypothesis-testing if the investigators compared data between groups or across time periods and used statistical tests to assess differences. We further categorized hypothesis-testing studies (for example, randomized and nonrandomized, controlled trials, controlled before-and-after studies) according to whether a concurrent comparison group was used. Hypothesis-testing studies without a concurrent comparison group included those using simple prepost, time-series, and historical control designs. Remaining hypothesis-testing studies were classified as cross-sectional designs and other. We classified quantitative reports as a predictive analysis if they used methods such as statistical modeling or expert panel estimates to predict what might happen with implementation of health information technology rather than what has happened. These studies typically used hybrid methodsfrequently mixing primary data collection with secondary data collection plus expert opinion and assumptionsto make quantitative estimates for data that had otherwise not been empirically measured. Cost-effectiveness and cost-benefit studies generally fell into this group. Data Extraction and Synthesis Two reviewers independently appraised and extracted details of selected articles using standardized abstraction forms and resolved discrepancies by consensus. We then used narrative synthesis methods to integrate findings into descriptive summaries. Each institution that accounted for more than 5% of the total sample of 257 papers was designated as a benchmark research leader. We grouped syntheses by institution and by whether the systems were commercially or internally developed. Role of the Funding Sources This work was produced under Agency for Healthcare Research and Quality contract no. 2002. In addition to the Agency for Healthcare Research and Quality, this work was also funded by the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, and the Office of Disease Prevention and Health Promotion, U.S. Department of Health and Human Services. The funding sources had no role in the design, analysis, or interpretation of the study or in the decision to submit the manuscript for publication. 
Data Synthesis Literature Selection Overview Of 867 articles, we rejected 141 during initial screening: 124 for not having health information technology as the subject, 4 for not reporting relevant outcomes, and 13 for miscellaneous reasons (categories not mutually exclusive). Of the remaining 726 articles, we excluded 469 descriptive reports that did not examine barriers (Figure). We recorded details of and summarized each of the 257 articles that we did include in an interactive database (healthit.ahrq.gov/tools/rand) that serves as the evidence table for our report (14). Twenty-four percent of all studies came from the following 4 benchmark institutions: 1) the Regenstrief Institute, 2) Brigham and Women's Hospital/Partners Health Care, 3) the Department of Veterans Affairs, and 4) LDS Hospital/Intermountain Health Care. Figure. Search flow for health information technology ( HIT ) literature. Pediatrics Types and Functions of Technology Systems The reports addressed the following types of primary systems: decision support aimed at providers (63%), electronic health records (37%), and computerized provider order entry (13%). Specific functional capabilities of systems that were described in reports included electronic documentation (31%), order entry (22%), results management (19%), and administrative capabilities (18%). Only 8% of the described systems had specific consumer health capabilities, and only 1% had capabilities that allowed systems from different facilities to connect with each other and share data interoperably. Most studies (n= 125) assessed the effect of the systems in the outpatient setting. Of the 213 hypothesis-testing studies, 84 contained some data on costs. Several studies assessed interventions with limited functionality, such as stand-alone decision support systems (15-17). Such studies provide limited information about issues that today's decision makers face when selecting and implementing health information technology. Thus, we preferentially highlight in the following paragraphs studies that were conducted in the United States, that had empirically measured data on multifunctional systems, and that included health information and data storage in the form of electronic documentation or order-entry capabilities. Predictive analyses were excluded. Seventy-six studies met these criteria: 54 from the 4 benchmark leaders and 22 from other institutions. Data from Benchmark Institutions The health information technology systems evaluated by the benchmark leaders shared many characteristics. All the systems were multifunctional and included decision support, all were internally developed by research experts at the respective academic institutions, and all had capabilities added incrementally over several years. Furthermore, most reported studies of these systems used research designs with high internal validity (for example, randomized, controlled trials). Appendix Table 1 (18-71) provides a structured summary of each study from the 4 benchmark institutions. This table also includes studies that met inclusion criteria not highlighted in this synthesis (26, 27, 30, 39, 40, 53, 62, 65, 70, 71). The data supported 5 primary themes (3 directly r",
"title": ""
},
{
"docid": "560577e6abcccdb399d437cbd52ad266",
"text": "With smart devices, particular smartphones, becoming our everyday companions, the ubiquitous mobile Internet and computing applications pervade people’s daily lives. With the surge demand on high-quality mobile services at anywhere, how to address the ubiquitous user demand and accommodate the explosive growth of mobile traffics is the key issue of the next generation mobile networks. The Fog computing is a promising solution towards this goal. Fog computing extends cloud computing by providing virtualized resources and engaged location-based services to the edge of the mobile networks so as to better serve mobile traffics. Therefore, Fog computing is a lubricant of the combination of cloud computing and mobile applications. In this article, we outline the main features of Fog computing and describe its concept, architecture and design goals. Lastly, we discuss some of the future research issues from the networking perspective.",
"title": ""
},
{
"docid": "398040041440f597b106c49c79be27ea",
"text": "BACKGROUND\nRecently, human germinal center-associated lymphoma (HGAL) gene protein has been proposed as an adjunctive follicular marker to CD10 and BCL6.\n\n\nMETHODS\nOur aim was to evaluate immunoreactivity for HGAL in 82 cases of follicular lymphomas (FLs)--67 nodal, 5 cutaneous and 10 transformed--which were all analysed histologically, by immunohistochemistry and PCR.\n\n\nRESULTS\nImmunostaining for HGAL was more frequently positive (97.6%) than that for BCL6 (92.7%) and CD10 (90.2%) in FLs; the cases negative for bcl6 and/or for CD10 were all positive for HGAL, whereas the two cases negative for HGAL were positive with BCL6; no difference in HGAL immunostaining was found among different malignant subtypes or grades.\n\n\nCONCLUSIONS\nTherefore, HGAL can be used in the immunostaining of FLs as the most sensitive germinal center (GC)-marker; when applied alone, it would half the immunostaining costs, reserving the use of the other two markers only to HGAL-negative cases.",
"title": ""
},
{
"docid": "ef8ba8ae9696333f5da066813a4b79d7",
"text": "Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatiotemporal heatmaps for both predicted captions, and for arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves comparable captioning performance with existing methods while providing more accurate saliency heatmaps. Our code is available at visionlearninggroup.github.io/caption-guided-saliency/.",
"title": ""
},
{
"docid": "054fcf065915118bbfa3f12759cb6912",
"text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.",
"title": ""
},
{
"docid": "e1315cfdc9c1a33b7b871c130f34d6ce",
"text": "TextTiling is a technique for subdividing texts into multi-paragraph units that represent passages, or subtopics. The discourse cues for identifying major subtopic shifts are patterns of lexical co-occurrence and distribution. The algorithm is fully implemented and is shown to produce segmentation that corresponds well to human judgments of the subtopic boundaries of 12 texts. Multi-paragraph subtopic segmentation should be useful for many text analysis tasks, including information retrieval and summarization.",
"title": ""
},
{
"docid": "aeb19f8f9c6e5068fc602682e4ae04d3",
"text": "Received: 29 November 2004 Revised: 26 July 2005 Accepted: 4 November 2005 Abstract Interpretive research in information systems (IS) is now a well-established part of the field. However, there is a need for more material on how to carry out such work from inception to publication. I published a paper a decade ago (Walsham, 1995) which addressed the nature of interpretive IS case studies and methods for doing such research. The current paper extends this earlier contribution, with a widened scope of all interpretive research in IS, and through further material on carrying out fieldwork, using theory and analysing data. In addition, new topics are discussed on constructing and justifying a research contribution, and on ethical issues and tensions in the conduct of interpretive work. The primary target audience for the paper is lessexperienced IS researchers, but I hope that the paper will also stimulate reflection for the more-experienced IS researcher and be of relevance to interpretive researchers in other social science fields. European Journal of Information Systems (2006) 15, 320–330. doi:10.1057/palgrave.ejis.3000589",
"title": ""
},
{
"docid": "6bfdd78045816085cd0fa5d8bb91fd18",
"text": "Contextual factors can greatly influence the users' preferences in listening to music. Although it is hard to capture these factors directly, it is possible to see their effects on the sequence of songs liked by the user in his/her current interaction with the system. In this paper, we present a context-aware music recommender system which infers contextual information based on the most recent sequence of songs liked by the user. Our approach mines the top frequent tags for songs from social tagging Web sites and uses topic modeling to determine a set of latent topics for each song, representing different contexts. Using a database of human-compiled playlists, each playlist is mapped into a sequence of topics and frequent sequential patterns are discovered among these topics. These patterns represent frequent sequences of transitions between the latent topics representing contexts. Given a sequence of songs in a user's current interaction, the discovered patterns are used to predict the next topic in the playlist. The predicted topics are then used to post-filter the initial ranking produced by a traditional recommendation algorithm. Our experimental evaluation suggests that our system can help produce better recommendations in comparison to a conventional recommender system based on collaborative or content-based filtering. Furthermore, the topic modeling approach proposed here is also useful in providing better insight into the underlying reasons for song selection and in applications such as playlist construction and context prediction.",
"title": ""
},
{
"docid": "638f7bf2f47895274995df166564ecc1",
"text": "In recent years, the video game market has embraced augmented reality video games, a class of video games that is set to grow as gaming technologies develop. Given the widespread use of video games among children and adolescents, the health implications of augmented reality technology must be closely examined. Augmented reality technology shows a potential for the promotion of healthy behaviors and social interaction among children. However, the full immersion and physical movement required in augmented reality video games may also put users at risk for physical and mental harm. Our review article and commentary emphasizes both the benefits and dangers of augmented reality video games for children and adolescents.",
"title": ""
},
{
"docid": "8924c1551030dc7e9aaf5611fd0a9ae2",
"text": "The term affordance describes an object’s utilitarian function or actionable possibilities. Product designers have taken great interest in the concept of affordances because of the bridge they provide relating to design, the interpretation of design and, ultimately, functionality in the hands of consumers. These concepts have been widely studied and applied in the field of psychology but have had limited formal application to packaging design and evaluation. We believe that the concepts related to affordances will reveal novel opportunities for packaging innovation. To catalyse this, presented work had the following objectives: (a) to propose a method by which packaging designers can purposefully consider affordances during the design process; (b) to explain this method in the context of a packaging-related case study; and (c) to measure the effect on package usability when an affordance-based design approach is employed. © 2014 The Authors. Packaging Technology and Science published by John Wiley & Sons Ltd.",
"title": ""
},
{
"docid": "8e077186aef0e7a4232eec0d8c73a5a2",
"text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fddadfbc6c1b34a8ac14f8973f052da5",
"text": "Abstract. Centroidal Voronoi tessellations are useful for subdividing a region in Euclidean space into Voronoi regions whose generators are also the centers of mass, with respect to a prescribed density function, of the regions. Their extensions to general spaces and sets are also available; for example, tessellations of surfaces in a Euclidean space may be considered. In this paper, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of an “energy.” Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Computational examples are provided which serve to illustrate the high quality of CCVT point sets. Finally, CCVT point sets are applied to polynomial interpolation and numerical integration on the sphere.",
"title": ""
},
{
"docid": "bf272aa2413f1bc186149e814604fb03",
"text": "Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories.",
"title": ""
},
{
"docid": "7c901910ead6c3e4723803085b7495d5",
"text": "Lenneberg (1967) hypothesized that language could be acquired only within a critical period, extending from early infancy until puberty. In its basic form, the critical period hypothesis need only have consequences for first language acquisition. Nevertheless, it is essential to our understanding of the nature of the hypothesized critical period to determine whether or not it extends as well to second language acquisition. If so, it should be the case that young children are better second language learners than adults and should consequently reach higher levels of final proficiency in the second language. This prediction was tested by comparing the English proficiency attained by 46 native Korean or Chinese speakers who had arrived in the United States between the ages of 3 and 39, and who had lived in the United States between 3 and 26 years by the time of testing. These subjects were tested on a wide variety of structures of English grammar, using a grammaticality judgment task. Both correlational and t-test analyses demonstrated a clear and strong advantage for earlier arrivals over the later arrivals. Test performance was linearly related to age of arrival up to puberty; after puberty, performance was low but highly variable and unrelated to age of arrival. This age effect was shown not to be an inadvertent result of differences in amount of experience with English, motivation, self-consciousness, or American identification. The effect also appeared on every grammatical structure tested, although the structures varied markedly in the degree to which they were well mastered by later learners. The results support the conclusion that a critical period for language acquisition extends its effects to second language acquisition.",
"title": ""
},
{
"docid": "584e84ac1a061f1bf7945ab4cf54d950",
"text": "Paul White, PhD, MD§ Acupuncture has been used in China and other Asian countries for the past 3000 yr. Recently, this technique has been gaining increased popularity among physicians and patients in the United States. Even though acupuncture-induced analgesia is being used in many pain management programs in the United States, the mechanism of action remains unclear. Studies suggest that acupuncture and related techniques trigger a sequence of events that include the release of neurotransmitters, endogenous opioid-like substances, and activation of c-fos within the central nervous system. Recent developments in central nervous system imaging techniques allow scientists to better evaluate the chain of events that occur after acupuncture-induced stimulation. In this review article we examine current biophysiological and imaging studies that explore the mechanisms of acupuncture analgesia.",
"title": ""
},
{
"docid": "5c5225b5e66d49f17a881ed1843e944c",
"text": "The organic-inorganic hybrid perovskites methylammonium lead iodide (CH3NH3PbI3) and the partially chlorine-substituted mixed halide CH3NH3PbI3-xClx emit strong and broad photoluminescence (PL) around their band gap energy of ∼1.6 eV. However, the nature of the radiative decay channels behind the observed emission and, in particular, the spectral broadening mechanisms are still unclear. Here we investigate these processes for high-quality vapor-deposited films of CH3NH3PbI3-xClx using time- and excitation-energy dependent photoluminescence spectroscopy. We show that the PL spectrum is homogenously broadened with a line width of 103 meV most likely as a consequence of phonon coupling effects. Further analysis reveals that defects or trap states play a minor role in radiative decay channels. In terms of possible lasing applications, the emission spectrum of the perovskite is sufficiently broad to have potential for amplification of light pulses below 100 fs pulse duration.",
"title": ""
},
{
"docid": "9097bf29a9ad2b33919e0667d20bf6d7",
"text": "Object detection, though gaining popularity, has largely been limited to detection from the ground or from satellite imagery. Aerial images, where the target may be obfuscated from the environmental conditions, angle-of-attack, and zoom level, pose a more significant challenge to correctly detect targets in. This paper describes the implementation of a regional convolutional neural network to locate and classify objects across several categories in complex, aerial images. Our current results show promise in detecting and classifying objects. Further adjustments to the network and data input should increase the localization and classification accuracies.",
"title": ""
},
{
"docid": "08ee3e3191ac1b56b3c41e89df62d047",
"text": "This article presents a gesture recognition/adaptation system for human--computer interaction applications that goes beyond activity classification and that, as a complement to gesture labeling, characterizes the movement execution. We describe a template-based recognition method that simultaneously aligns the input gesture to the templates using a Sequential Monte Carlo inference technique. Contrary to standard template-based methods based on dynamic programming, such as Dynamic Time Warping, the algorithm has an adaptation process that tracks gesture variation in real time. The method continuously updates, during execution of the gesture, the estimated parameters and recognition results, which offers key advantages for continuous human--machine interaction. The technique is evaluated in several different ways: Recognition and early recognition are evaluated on 2D onscreen pen gestures; adaptation is assessed on synthetic data; and both early recognition and adaptation are evaluated in a user study involving 3D free-space gestures. The method is robust to noise, and successfully adapts to parameter variation. Moreover, it performs recognition as well as or better than nonadapting offline template-based methods.",
"title": ""
}
] |
scidocsrr
|
11ae217799d644b68b900da25cb99f16
|
An Examination of Regret in Bullying Tweets
|
[
{
"docid": "8dfa68e87eee41dbef8e137b860e19cc",
"text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.",
"title": ""
}
] |
[
{
"docid": "5c0994fab71ea871fad6915c58385572",
"text": "We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.",
"title": ""
},
{
"docid": "543dc9543221b507746ebf1fe8d14928",
"text": "Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models’ usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used Information Criterion (ICs) used for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture models (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n D 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.",
"title": ""
},
{
"docid": "89092357b733b66d63580e849435e40a",
"text": "Universal asynchronous receiver transmitter, abbreviated UART is a integrated circuit used for serial communications over a computer or peripheral device serial port. UARTs are now commonly included in microcontrollers. The universal designation indicates that the data format and transmission speeds are configurable and that the actual electric signaling levels and methods (such as differential signaling etc.) typically are handled by a special driver circuit external to the UART. Baud rate of 20Mbps using clock of 20MHz is used. FIFO (First-In-First Out) is used to store data temporarily during high speed transmission to get synchronization. The design is synthesized in Verilog HDL and reliability of the Verilog HDL implementation of UART is verified by simulated waveforms. We are using Cadence tool for simulation and synthesis.",
"title": ""
},
{
"docid": "295c6a54db24bf28f5970e60e6bf5971",
"text": "This thesis presents a learning based approach for detecting classes of objects and patterns with variable image appearance but highly predictable image boundaries. It consists of two parts. In part one, we introduce our object and pattern detection approach using a concrete human face detection example. The approach rst builds a distribution-based model of the target pattern class in an appropriate feature space to describe the target's variable image appearance. It then learns from examples a similarity measure for matching new patterns against the distribution-based target model. The approach makes few assumptions about the target pattern class and should therefore be fairly general, as long as the target class has predictable image boundaries. Because our object and pattern detection approach is very much learning-based, how well a system eventually performs depends heavily on the quality of training examples it receives. The second part of this thesis looks at how one can select high quality examples for function approximation learning tasks. We propose an active learning formulation for function approximation, and show for three speci c approximation function classes, that the active example selection strategy learns its target with fewer data samples than random sampling. We then simplify the original active learning formulation, and show how it leads to a tractable example selection paradigm, suitable for use in many object and pattern detection problems. Copyright c Massachusetts Institute of Technology, 1995 This report describes research done at the Arti cial Intelligence Laboratory and within the Center for Biological and Computational Learning. This research is sponsored by grants from the O ce of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041. Support for the A.I. Laboratory's arti cial intelligence research is provided by ONR contract N00014-91-J-4038. Learning and Example Selection for Object and Pattern Detection",
"title": ""
},
{
"docid": "e85cf5b993cc4d82a1dea47f9ce5d18b",
"text": "We recently proposed an approach inspired by Sparse Component Analysis for real-time localisation of multiple sound sources using a circular microphone array. The method was based on identifying time-frequency zones where only one source is active, reducing the problem to single-source localisation in these zones. A histogram of estimated Directions of Arrival (DOAs) was formed and then processed to obtain improved DOA estimates, assuming that the number of sources was known. In this paper, we extend our previous work by proposing a new method for the final DOA estimations, that outperforms our previous method at lower SNRs and in the case of six simultaneous speakers. In keeping with the spirit of our previous work, the new method is very computationally efficient, facilitating its use in real-time systems.",
"title": ""
},
{
"docid": "4d44572846a0989bf4bc230b669c88b7",
"text": "Application-specific integrated circuit (ASIC) ML4425 is often used for sensorless control of permanent-magnet (PM) brushless direct current (BLDC) motor drives. It integrates the terminal voltage of the unenergized winding that contains the back electromotive force (EMF) information and uses a phase-locked loop (PLL) to determine the proper commutation sequence for the BLDC motor. However, even without pulsewidth modulation, the terminal voltage is distorted by voltage pulses due to the freewheel diode conduction. The pulses, which appear very wide in an ultrahigh-speed (120 kr/min) drive, are also integrated by the ASIC. Consequently, the motor commutation is significantly retarded, and the drive performance is deteriorated. In this paper, it is proposed that the ASIC should integrate the third harmonic back EMF instead of the terminal voltage, such that the commutation retarding is largely reduced and the motor performance is improved. Basic principle and implementation of the new ASIC-based sensorless controller will be presented, and experimental results will be given to verify the control strategy. On the other hand, phase delay in the motor currents arises due to the influence of winding inductance, reducing the drive performance. Therefore, a novel circuit with discrete components is proposed. It also uses the integration of third harmonic back EMF and the PLL technique and provides controllable advanced commutation to the BLDC motor.",
"title": ""
},
{
"docid": "d1e9eb1357381310c4540a6dcbe8973a",
"text": "We introduce a method for learning Bayesian networks that handles the discretization of continuous variables as an integral part of the learning process. The main ingredient in this method is a new metric based on the Minimal Description Length principle for choosing the threshold values for the discretization while learning the Bayesian network structure. This score balances the complexity of the learned discretization and the learned network structure against how well they model the training data. This ensures that the discretization of each variable introduces just enough intervals to capture its interaction with adjacent variables in the network. We formally derive the new metric, study its main properties, and propose an iterative algorithm for learning a discretization policy. Finally, we illustrate its behavior in applications to supervised learning.",
"title": ""
},
{
"docid": "1f9c032db6d92771152b6831acbd8af3",
"text": "Cyberbullying has provoked public concern after well-publicized suicides of adolescents. This mixed-methods study investigates the social representation of these suicides. A content analysis of 184 U.S. newspaper articles on death by suicide associated with cyberbullying or aggression found that few articles adhered to guidelines suggested by the World Health Organization and the American Foundation for Suicide Prevention to protect against suicidal behavioral contagion. Few articles made reference to suicide or bullying prevention resources, and most suggested that the suicide had a single cause. Thematic analysis of a subset of articles found that individual deaths by suicide were used as cautionary tales to prompt attention to cyberbullying. This research suggests that newspaper coverage of these events veers from evidence-based guidelines and that more work is needed to determine how best to engage with journalists about the potential consequences of cyberbullying and suicide coverage.",
"title": ""
},
{
"docid": "8ff4c6a5208b22a47eb5006c329817dc",
"text": "Goal: To evaluate a novel kind of textile electrodes based on woven fabrics treated with PEDOT:PSS, through an easy fabrication process, testing these electrodes for biopotential recordings. Methods: Fabrication is based on raw fabric soaking in PEDOT:PSS using a second dopant, squeezing and annealing. The electrodes have been tested on human volunteers, in terms of both skin contact impedance and quality of the ECG signals recorded at rest and during physical activity (power spectral density, baseline wandering, QRS detectability, and broadband noise). Results: The electrodes are able to operate in both wet and dry conditions. Dry electrodes are more prone to noise artifacts, especially during physical exercise and mainly due to the unstable contact between the electrode and the skin. Wet (saline) electrodes present a stable and reproducible behavior, which is comparable or better than that of traditional disposable gelled Ag/AgCl electrodes. Conclusion: The achieved results reveal the capability of this kind of electrodes to work without the electrolyte, providing a valuable interface with the skin, due to mixed electronic and ionic conductivity of PEDOT:PSS. These electrodes can be effectively used for acquiring ECG signals. Significance: Textile electrodes based on PEDOT:PSS represent an important milestone in wearable monitoring, as they present an easy and reproducible fabrication process, very good performance in wet and dry (at rest) conditions and a superior level of comfort with respect to textile electrodes proposed so far. This paves the way to their integration into smart garments.",
"title": ""
},
{
"docid": "86f0e783a93fc783e10256c501008b0d",
"text": "We present a biologically-motivated system for the recognition of actions from video sequences. The approach builds on recent work on object recognition based on hierarchical feedforward architectures [25, 16, 20] and extends a neurobiological model of motion processing in the visual cortex [10]. The system consists of a hierarchy of spatio-temporal feature detectors of increasing complexity: an input sequence is first analyzed by an array of motion- direction sensitive units which, through a hierarchy of processing stages, lead to position-invariant spatio-temporal feature detectors. We experiment with different types of motion-direction sensitive units as well as different system architectures. As in [16], we find that sparse features in intermediate stages outperform dense ones and that using a simple feature selection approach leads to an efficient system that performs better with far fewer features. We test the approach on different publicly available action datasets, in all cases achieving the highest results reported to date.",
"title": ""
},
{
"docid": "0bb6e496cd176e85fcec98bed669e18d",
"text": "Men and women clearly differ in some psychological domains. A. H. Eagly (1995) shows that these differences are not artifactual or unstable. Ideally, the next scientific step is to develop a cogent explanatory framework for understanding why the sexes differ in some psychological domains and not in others and for generating accurate predictions about sex differences as yet undiscovered. This article offers a brief outline of an explanatory framework for psychological sex differences--one that is anchored in the new theoretical paradigm of evolutionary psychology. Men and women differ, in this view, in domains in which they have faced different adaptive problems over human evolutionary history. In all other domains, the sexes are predicted to be psychologically similar. Evolutionary psychology jettisons the false dichotomy between biology and environment and provides a powerful metatheory of why sex differences exist, where they exist, and in what contexts they are expressed (D. M. Buss, 1995).",
"title": ""
},
{
"docid": "46fc7691db8cd4414c810be22818734f",
"text": "The Internet of Things (IoT) realizes a vision where billions of interconnected devices are deployed just about everywhere, from inside our bodies to the most remote areas of the globe. As the IoT will soon pervade every aspect of our lives and will be accessible from anywhere, addressing critical IoT security threats is now more important than ever. Traditional approaches where security is applied as an afterthought and as a “patch” against known attacks are insufficient. IoT challenges require a new secure-by-design vision, where threats are addressed proactively and IoT devices learn to dynamically adapt to different threats. In this paper, we first provide a taxonomy and survey the state of the art in IoT security research, and offer a roadmap of concrete research challenges to address existing and next-generation IoT security threats.",
"title": ""
},
{
"docid": "ef09bc08cc8e94275e652e818a0af97f",
"text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.",
"title": ""
},
{
"docid": "9b470feac9ae4edd11b87921934c9fc2",
"text": "Cutaneous melanoma may in some instances be confused with seborrheic keratosis, which is a very common neoplasia, more often mistaken for actinic keratosis and verruca vulgaris. Melanoma may clinically resemble seborrheic keratosis and should be considered as its possible clinical simulator. We report a case of melanoma with dermatoscopic characteristics of seborrheic keratosis and emphasize the importance of the dermatoscopy algorithm in differentiating between a melanocytic and a non-melanocytic lesion, of the excisional biopsy for the establishment of the diagnosis of cutaneous tumors, and of the histopathologic examination in all surgically removed samples.",
"title": ""
},
{
"docid": "ba2769abc859882f600e64cb14af2ac6",
"text": "OBJECTIVE\nThis study measures and compares the outcome of conservative physical therapy with traction, by using magnetic resonance imaging and clinical parameters in patients presenting with low back pain caused by lumbar disc herniation.\n\n\nMETHODS\nA total of 26 patients with LDH (14F, 12M with mean aged 37 +/- 11) were enrolled in this study and 15 sessions (per day on 3 weeks) of physical therapy were applied. That included hot pack, ultrasound, electrotherapy and lumbar traction. Physical examination of the lumbar spine, severity of pain, sleeping order, patient and physician global assessment with visual analogue scale, functional disability by HAQ, Roland Disability Questionnaire, and Modified Oswestry Disability Questionnaire were assessed at baseline and at 4-6 weeks after treatment. Magnetic resonance imaging examinations were carried out before and 4-6 weeks after the treatment\n\n\nRESULTS\nAll patients completed the therapy session. There were significant reductions in pain, sleeping disturbances, patient and physician global assessment and disability scores, and significant increases in lumbar movements between baseline and follow-up periods. There were significant reductions of size of the herniated mass in five patients, and significant increase in 3 patients on magnetic resonance imaging after treatment, but no differences in other patients.\n\n\nCONCLUSIONS\nThis study showed that conventional physical therapies with lumbar traction were effective in the treatment of patient with subacute LDH. These results suggest that clinical improvement is not correlated with the finding of MRI. Patients with LDH should be monitored clinically (Fig. 3, Ref. 18).",
"title": ""
},
{
"docid": "85c3dc3dae676f0509a99c6d27db8423",
"text": "Swarming, or aggregations of organisms in groups, can be found in nature in many organisms ranging from simple bacteria to mammals. Such behavior can result from several different mechanisms. For example, individuals may respond directly to local physical cues such as concentration of nutrients or distribution of some chemicals as seen in some bacteria and social insects, or they may respond directly to other individuals as seen in fish, birds, and herds of mammals. In this dissertation, we consider models for aggregating and social foraging swarms and perform rigorous stability analysis of emerging collective behavior. Moreover, we consider formation control of a general class of multi-agent systems in the framework of nonlinear output regulation problem with application on formation control of mobile robots. First, an individual-based continuous time model for swarm aggregation in an n-dimensional space is identified and its stability properties are analyzed. The motion of each individual is determined by two factors: (i) attraction to the other individuals on long distances and (ii) repulsion from the other individuals on short distances. It is shown that the individuals (autonomous agents or biological creatures) will form a cohesive swarm in a finite time. Moreover, explicit bounds on the swarm size and time of convergence are derived. Then, the results are generalized to a more general class of attraction/repulsion functions and extended to handle formation stabilization and uniform swarm density. After that, we consider social foraging swarms. We ii assume that the swarm is moving in an environment with an ”attractant/repellent” profile (i.e., a profile of nutrients or toxic substances) which also affects the motion of each individual by an attraction to the more favorable or nutrient rich regions (or repulsion from the unfavorable or toxic regions) of the profile. The stability properties of the collective behavior of the swarm for different profiles are studied and conditions for collective convergence to more favorable regions are provided. Then, we use the ideas for modeling and analyzing the behavior of honey bee clusters and in-transit swarms, a phenomena seen during the reproduction of the bees. After that, we consider one-dimensional asynchronous swarms with time delays. We prove that, despite the asynchronism and time delays in the motion of the individuals, the swarm will converge to a comfortable position with comfortable intermember spacing. Finally, we consider formation control of a multi-agent system with general nonlinear dynamics. It is assumed that the formation is required to follow a virtual leader whose dynamics are generated by an autonomous neutrally stable system. We develop a decentralized control strategy based on the nonlinear output regulation (servomechanism) theory. We illustrate the procedure with application to formation control of mobile robots.",
"title": ""
},
{
"docid": "f9eff7a4652f6242911f41ba180f75ed",
"text": "The last ten years have seen a significant increase in computationally relevant research seeking to build models of narrative and its use. These efforts have focused in and/or drawn from a range of disciplines, including narrative theory Many of these research efforts have been informed by a focus on the development of an explicit model of narrative and its function. Computational approaches from artificial intelligence (AI) are particularly well-suited to such modeling tasks, as they typically involve precise definitions of aspects of some domain of discourse and well-defined algorithms for reasoning over those definitions. In the case of narrative modeling, there is a natural fit with AI techniques. AI approaches often concern themselves with representing and reasoning about some real world domain of discourse – a microworld where inferences must be made in order to draw conclusions about some higher order property of the world or to explain, predict, control or communicate about the microworld's dynamic state. In this regard, the fictional worlds created by storytellers and the ways that we communicate about them suggest promising and immediate analogs for application of existing AI methods. One of the most immediate analogs between AI research and narrative models lies in the area of reasoning about actions and plans. The goals and plans that characters form and act upon within a story are the primary elements of the story's plot. At first glance, story plans have many of the same features as knowledge representations developed by AI researchers to characterize the plans formed by industrial robots operating to assemble automobile parts on a factory floor or by autonomous vehicles traversing unknown physical landscapes. As we will discuss below, planning representations have offered significant promise in modeling plot structure. Equally as significantly, however, is their ability to be used by intelligent algorithms in the automatic creation of plot lines. Just as AI planning systems can produce new plans to achieve an agent's goals in the face of a unanticipated execution context, so too may planning systems work to produce the plans of a collection of characters as they scheme to obtain, thwart, overcome or succeed.",
"title": ""
},
{
"docid": "ca873d33aacb15d97c830a60dba6f7a3",
"text": "Internet of Things (IoT) is extension of current internet to provide communication, connection, and inter-networking between various devices or physical objects also known as “Things.” In this paper we have reported an effective use of IoT for Environmental Condition Monitoring and Controlling in Homes. We also provide fault detection and correction in any devices connected to this system automatically. Home Automation is nothing but automation of regular activities inside the home. Now a day's due to huge advancement in wireless sensor network and other computation technologies, it is possible to provide flexible and low cost home automation system. However there is no any system available in market which provide home automation as well as error detection in the devices efficiently. In this system we use prediction to find out the required solution if any problem occurs in any device connected to the system. To achieve that we are applying Data Mining concept. For efficient data mining we use Naive Bayes Classifier algorithm to find out the best possible solution. This gives a huge upper hand on other available home automation system, and we actually manage to provide a real intelligent system.",
"title": ""
},
{
"docid": "e97c90a7175b07d6c08e7fc53b197c2d",
"text": "Retinal image of surrounding objects varies tremendously due to the changes in position, size, pose, illumination condition, background context, occlusion, noise, and non-rigid deformations. But despite these huge variations, our visual system is able to invariantly recognize any object in just a fraction of a second. To date, various computational models have been proposed to mimic the hierarchical processing gically inspired network architecture and learning rule significantly improves the models' performance when facing challenging invariant object recognition problems. Our model is an asynchronous feedforward spiking neural network. When the network is presented with natural images, the neurons in the entry layers detect edges, and the most activated ones fire first, while neurons in higher layers are equipped with spike timing-dependent plasticity. These neurons progressively become selective to intermediate complexity visual features appropriate for object categorization. The model is evaluated on 3D-Object and ETH-80 datasets which are two benchmarks for invariant object recognition, and is shown to outperform state-of-the-art models, including DeepConvNet and HMAX. This demonstrates its ability to accurately recognize different instances of multiple object classes even under various appearance conditions (different views, scales, tilts, and backgrounds). Several statistical analysis techniques are used to show that our model extracts class specific and highly informative features. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "020e83789704528e5842b8cac5bf89b6",
"text": "The present study experimentally investigated the effect of Facebook usage on women's mood and body image, whether these effects differ from an online fashion magazine, and whether appearance comparison tendency moderates any of these effects. Female participants (N=112) were randomly assigned to spend 10min browsing their Facebook account, a magazine website, or an appearance-neutral control website before completing state measures of mood, body dissatisfaction, and appearance discrepancies (weight-related, and face, hair, and skin-related). Participants also completed a trait measure of appearance comparison tendency. Participants who spent time on Facebook reported being in a more negative mood than those who spent time on the control website. Furthermore, women high in appearance comparison tendency reported more facial, hair, and skin-related discrepancies after Facebook exposure than exposure to the control website. Given its popularity, more research is needed to better understand the impact that Facebook has on appearance concerns.",
"title": ""
}
] |
scidocsrr
|
43800ddb4f124a9f1c20037d29855fd0
|
Usability measurement for speech systems : SASSI revisited
|
[
{
"docid": "5750ebcfd885097aeeef66582380c286",
"text": "In the present paper, we investigate the validity and reliability of de-facto evaluation standards, defined for measuring or predicting the quality of the interaction with spoken dialogue systems. Two experiments have been carried out with a dialogue system for controlling domestic devices. During these experiments, subjective judgments of quality have been collected by two questionnaire methods (ITU-T Rec. P.851 and SASSI), and parameters describing the interaction have been logged and annotated. Both metrics served the derivation of prediction models according to the PARADISE approach. Although the limited database allows only tentative conclusions to be drawn, the results suggest that both questionnaire methods provide valid measurements of a large number of different quality aspects; most of the perceptive dimensions underlying the subjective judgments can also be measured with a high reliability. The extracted parameters mainly describe quality aspects which are directly linked to the system, environmental and task characteristics. Used as an input to prediction models, the parameters provide helpful information for system design and optimization, but not general predictions of system usability and acceptability. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "106b7450136b9eafdddbaca5131be2f5",
"text": "This paper describes the main features of a low cost and compact Ka-band satcom terminal being developed within the ESA-project LOCOMO. The terminal will be compliant with all capacities associated with communication on the move supplying higher quality, better performance and faster speed services than the current available solutions in Ku band. The terminal will be based on a dual polarized low profile Ka-band antenna with TX and RX capabilities.",
"title": ""
},
{
"docid": "dd37e97635b0ded2751d64cafcaa1aa4",
"text": "DEVICES, AND STRUCTURES By S.E. Lyshevshi, CRC Press, 2002. This book is the first of the CRC Press “Nanoand Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could be a textbook of a semester course on microelectro mechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gen or DNA-related cases. Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.",
"title": ""
},
{
"docid": "4f3d2b869322125a8fad8a39726c99f8",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "3b0f5d827a58fc6077e7c304cd2d35b8",
"text": "BACKGROUND\nPatients suffering from depression experience significant mood, anxiety, and cognitive symptoms. Currently, most antidepressants work by altering neurotransmitter activity in the brain to improve these symptoms. However, in the last decade, research has revealed an extensive bidirectional communication network between the gastrointestinal tract and the central nervous system, referred to as the \"gut-brain axis.\" Advances in this field have linked psychiatric disorders to changes in the microbiome, making it a potential target for novel antidepressant treatments. The aim of this review is to analyze the current body of research assessing the effects of probiotics, on symptoms of depression in humans.\n\n\nMETHODS\nA systematic search of five databases was performed and study selection was completed using the preferred reporting items for systematic reviews and meta-analyses process.\n\n\nRESULTS\nTen studies met criteria and were analyzed for effects on mood, anxiety, and cognition. Five studies assessed mood symptoms, seven studies assessed anxiety symptoms, and three studies assessed cognition. The majority of the studies found positive results on all measures of depressive symptoms; however, the strain of probiotic, the dosing, and duration of treatment varied widely and no studies assessed sleep.\n\n\nCONCLUSION\nThe evidence for probiotics alleviating depressive symptoms is compelling but additional double-blind randomized control trials in clinical populations are warranted to further assess efficacy.",
"title": ""
},
{
"docid": "81cf3581955988c71b58e7a097ea00bd",
"text": "Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.",
"title": ""
},
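The abstract above frames sparse Jacobian and Hessian estimation as a distance-2 vertex colouring problem on a column graph. As a purely illustrative aid (not one of the specific algorithms surveyed in that article), the following Python sketch shows a generic greedy distance-2 colouring over a dictionary-based adjacency structure; the vertex ordering and data layout are assumptions of this sketch.

```python
# Greedy distance-2 colouring: any two vertices within distance 2 of each
# other receive different colours, which is the property that makes a column
# partition "structurally orthogonal" in the Jacobian-estimation setting.
def greedy_distance2_coloring(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    colour = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:                      # distance-1 neighbours
            forbidden.add(colour.get(u))
            for w in adj[u]:                  # distance-2 neighbours
                if w != v:
                    forbidden.add(colour.get(w))
        c = 0
        while c in forbidden:                 # smallest colour not yet forbidden
            c += 1
        colour[v] = c
    return colour

# Example: a path a-b-c-d; a and c share a neighbour, so they must differ.
print(greedy_distance2_coloring({"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}))
```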
{
"docid": "b6e62590995a41adb1128703060e0e2d",
"text": "Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in the arena of special education. Although 3D printing is beginning to infiltrate mainstream education, little to no research has explored 3D printing in the context of students with special support needs. We present a formative study exploring the use of 3D printing at three locations serving populations with varying ability, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing performs three functions in special education: developing 3D design and printing skills encourages STEM engagement; 3D printing can support the creation of educational aids for providing accessible curriculum content; and 3D printing can be used to create custom adaptive devices. In addition to providing opportunities to students, faculty, and caregivers in their efforts to integrate 3D printing in special education settings, our investigation also revealed several concerns and challenges. We present our investigation at three diverse sites as a case study of 3D printing in the realm of special education, discuss obstacles to efficient 3D printing in this context, and offer suggestions for designers and technologists.",
"title": ""
},
{
"docid": "95633e39a6f1dee70317edfc56e248f4",
"text": "We construct a deep portfolio theory. By building on Markowitz’s classic risk-return trade-off, we develop a self-contained four-step routine of encode, calibrate, validate and verify to formulate an automated and general portfolio selection process. At the heart of our algorithm are deep hierarchical compositions of portfolios constructed in the encoding step. The calibration step then provides multivariate payouts in the form of deep hierarchical portfolios that are designed to target a variety of objective functions. The validate step trades-off the amount of regularization used in the encode and calibrate steps. The verification step uses a cross validation approach to trace out an ex post deep portfolio efficient frontier. We demonstrate all four steps of our portfolio theory numerically.",
"title": ""
},
{
"docid": "25ad61565be2eb2490b3bbc03b196d09",
"text": "To reduce energy costs and emissions of microgrids, daily operation is critical. The problem is to commit and dispatch distributed devices with renewable generation to minimize the total energy and emission cost while meeting the forecasted energy demand. The problem is challenging because of the intermittent nature of renewables. In this paper, photovoltaic (PV) uncertainties are modeled by a Markovian process. For effective coordination, other devices are modeled as Markov processes with states depending on PV states. The entire problem is Markovian. This combinatorial problem is solved using branch-and-cut. Beyond energy and emission costs, to consider capital and maintenance costs in the long run, microgrid design is also essential. The problem is to decide device sizes with given types to minimize the lifetime cost while meeting energy demand. Its complexity increases exponentially with the problem size. To evaluate the lifetime cost including the reliability cost and the classic components such as capital and fuel costs, a linear model is established. By selecting a limited number of possible combinations of device sizes, exhaustive search is used to find the optimized design. The results show that the operation method is efficient in saving cost and scalable, and microgrids have lower lifetime costs than conventional energy systems. Implications for regulators and distribution utilities are also discussed.",
"title": ""
},
{
"docid": "b5a4b5b3e727dde52a9c858d3360a2e7",
"text": "Differential privacy is a recent framework for computation on sensitive data, which has shown considerable promise in the regime of large datasets. Stochastic gradient methods are a popular approach for learning in the data-rich regime because they are computationally tractable and scalable. In this paper, we derive differentially private versions of stochastic gradient descent, and test them empirically. Our results show that standard SGD experiences high variability due to differential privacy, but a moderate increase in the batch size can improve performance significantly.",
"title": ""
},
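The abstract above describes deriving differentially private variants of stochastic gradient descent. As a hedged illustration only (the paper's exact mechanism, clipping threshold and noise calibration are not reproduced here), the sketch below shows the common pattern of per-example gradient clipping followed by Gaussian noise; `grad_fn`, `clip_norm` and `noise_scale` are hypothetical names introduced for this example.

```python
import numpy as np

def dp_sgd_step(w, batch_x, batch_y, grad_fn, lr=0.1, clip_norm=1.0, noise_scale=1.0):
    """One SGD update with per-example clipping and Gaussian noise."""
    clipped = []
    for x, y in zip(batch_x, batch_y):
        g = grad_fn(w, x, y)                                         # per-example gradient
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))     # bound each example's contribution
    noise = np.random.normal(0.0, noise_scale * clip_norm, size=w.shape)
    g_avg = (np.sum(clipped, axis=0) + noise) / len(batch_x)         # noisy mean gradient
    return w - lr * g_avg
```

Clipping bounds the sensitivity of the summed gradient, which is what lets the added Gaussian noise provide a differential-privacy guarantee; larger batches dilute the noise, consistent with the abstract's observation that a moderate increase in batch size improves performance.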
{
"docid": "924d833125453fa4c525df5f607724e1",
"text": "Strong stubborn sets have recently been analyzed and successfully applied as a pruning technique for planning as heuristic search. Strong stubborn sets are defined declaratively as constraints over operator sets. We show how these constraints can be relaxed to offer more freedom in choosing stubborn sets while maintaining the correctness and optimality of the approach. In general, many operator sets satisfy the definition of stubborn sets. We study different strategies for selecting among these possibilities and show that existing approaches can be considerably improved by rather simple strategies, eliminating most of the overhead of the previous",
"title": ""
},
{
"docid": "ec6d6d6f8dc3db0bdae42ee0173b1639",
"text": "AIMS\nWe investigated the population-level relationship between exposure to brand-specific advertising and brand-specific alcohol use among US youth.\n\n\nMETHODS\nWe conducted an internet survey of a national sample of 1031 youth, ages 13-20, who had consumed alcohol in the past 30 days. We ascertained all of the alcohol brands respondents consumed in the past 30 days, as well as which of 20 popular television shows they had viewed during that time period. Using a negative binomial regression model, we examined the relationship between aggregated brand-specific exposure to alcohol advertising on the 20 television shows [ad stock, measured in gross rating points (GRPs)] and youth brand-consumption prevalence, while controlling for the average price and overall market share of each brand.\n\n\nRESULTS\nBrands with advertising exposure on the 20 television shows had a consumption prevalence about four times higher than brands not advertising on those shows. Brand-level advertising elasticity of demand varied by exposure level, with higher elasticity in the lower exposure range. The estimated advertising elasticity of 0.63 in the lower exposure range indicates that for each 1% increase in advertising exposure, a brand's youth consumption prevalence increases by 0.63%.\n\n\nCONCLUSIONS\nAt the population level, underage youths' exposure to brand-specific advertising was a significant predictor of the consumption prevalence of that brand, independent of each brand's price and overall market share. The non-linearity of the observed relationship suggests that youth advertising exposure may need to be lowered substantially in order to decrease consumption of the most heavily advertised brands.",
"title": ""
},
{
"docid": "6de2b5fa5c8d3db9f9d599b6ebb56782",
"text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO 2-C m−2 d−1 for the top soil layer). In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentraCorrespondence to: D. Huygens ([email protected]) tions, varying from 148 till 48 g kg −1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).",
"title": ""
},
{
"docid": "59a4bf897006a0bcadb562ff6446e4e5",
"text": "As the number and variety of cyber threats increase, it becomes more critical to share intelligence information in a fast and efficient manner. However, current cyber threat intelligence data do not contain sufficient information about how to specify countermeasures or how institutions should apply countermeasures automatically on their networks. A flexible and agile network architecture is required in order to determine and deploy countermeasures quickly. Software-defined networks facilitate timely application of cyber security measures thanks to their programmability. In this work, we propose a novel model for producing software-defined networking-based solutions against cyber threats and configuring networks automatically using risk analysis. We have developed a prototype implementation of the proposed model and demonstrated the applicability of the model. Furthermore, we have identified and presented future research directions in this area.",
"title": ""
},
{
"docid": "db55d7b7e0185d872b27c89c3892a289",
"text": "Bitcoin relies on the Unspent Transaction Outputs (UTXO) set to efficiently verify new generated transactions. Every unspent output, no matter its type, age, value or length is stored in every full node. In this paper we introduce a tool to study and analyze the UTXO set, along with a detailed description of the set format and functionality. Our analysis includes a general view of the set and quantifies the difference between the two existing formats up to the date. We also provide an accurate analysis of the volume of dust and unprofitable outputs included in the set, the distribution of the block height in which the outputs where included, and the use of non-standard outputs.",
"title": ""
},
{
"docid": "b34485c65c4e6780166ea0af5f13c08a",
"text": "The rise of the Internet of Things (IoT) and the recent focus on a gamut of 'Smart City' initiatives world-wide have pushed for new advances in data stream systems to (1) support complex analytics and evolving graph applications as continuous queries, and (2) deliver fast and scalable processing on large data streams. Unfortunately current continuous query languages (CQL) lack the features and constructs needed to support the more advanced applications. For example recursive queries are now part of SQL, Datalog, and other query languages, but they are not supported by most CQLs, a fact that caused a significant loss of expressive power, which is further aggravated by the limitation that only non-blocking queries can be supported. To overcome these limitations we have developed an a dvanced st ream r easo ning system ASTRO that builds on recent advances in supporting aggregates in recursive queries. In this demo, we will briefly elucidate the formal Streamlog semantics, which combined with the Pre-Mappability (PreM) concept, allows the declarative specification of many complex continuous queries, which are then efficiently executed in real-time by the portable ASTRO architecture. Using different case studies, we demonstrate (i) the ease-of-use, (ii) the expressive power and (iii) the robustness of our system, as compared to other state-of-the-art declarative CQL systems.",
"title": ""
},
{
"docid": "afb3098f38a8a3f0daad4d9e0e314ca2",
"text": "We have developed a genetic approach to visualize axons from olfactory sensory neurons expressing a given odorant receptor, as they project to the olfactory bulb. Neurons expressing a specific receptor project to only two topographically fixed loci among the 1800 glomeruli in the mouse olfactory bulb. Our data provide direct support for a model in which a topographic map of receptor activation encodes odor quality in the olfactory bulb. Receptor swap experiments suggest that the olfactory receptor plays an instructive role in the guidance process but cannot be the sole determinant in the establishment of this map. This genetic approach may be more broadly applied to visualize the development and plasticity of projections in the mammalian nervous system.",
"title": ""
},
{
"docid": "b25e35dd703d19860bbbd8f92d80bd26",
"text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.",
"title": ""
},
{
"docid": "c26339c1a74de4797096d2ea58e60f25",
"text": "Existing systems deliver high accuracy and F1-scores for detecting paraphrase and semantic similarity on traditional clean-text corpus. For instance, on the clean-text Microsoft Paraphrase benchmark database, the existing systems attain an accuracy as high as 0.8596. However, existing systems for detecting paraphrases and semantic similarity on user-generated short-text content on microblogs such as Twitter, comprising of noisy and ad hoc short-text, needs significant research attention. In this paper, we propose a machine learning based approach towards this. We propose a set of features that, although well-known in the NLP literature for solving other problems, have not been explored for detecting paraphrase or semantic similarity, on noisy user-generated short-text data such as Twitter. We apply support vector machine (SVM) based learning. We use the benchmark Twitter paraphrase data, released as a part of SemEval 2015, for experiments. Our system delivers a paraphrase detection F1-score of 0.717 and semantic similarity detection F1-score of 0.741, thereby significantly outperforming the existing systems, that deliver F1-scores of 0.696 and 0.724 for the two problems respectively. Our features also allow us to obtain a rank among the top-10, when trained on the Microsoft Paraphrase corpus and tested on the corresponding test data, thereby empirically establishing our approach as ubiquitous across the different paraphrase detection databases.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
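The abstract above casts unit selection as a search over a state-transition network with target (state occupancy) costs and concatenation (transition) costs, solved with a pruned Viterbi search. The following sketch illustrates only the unpruned dynamic-programming core of that idea; the cost functions and candidate lists are placeholders, not the trained weights described in the paper.

```python
# Viterbi-style unit selection: pick one unit per target phoneme so that the
# sum of target costs and pairwise join costs along the sequence is minimal.
def viterbi_unit_selection(targets, candidates, target_cost, join_cost):
    """targets: list of target specs; candidates[i]: list of units for target i."""
    best = [{u: (target_cost(targets[0], u), [u]) for u in candidates[0]}]
    for i in range(1, len(targets)):
        layer = {}
        for u in candidates[i]:
            tc = target_cost(targets[i], u)
            # choose the predecessor minimising accumulated cost plus join cost
            prev_u, (prev_cost, prev_path) = min(
                best[i - 1].items(), key=lambda kv: kv[1][0] + join_cost(kv[0], u))
            layer[u] = (prev_cost + join_cost(prev_u, u) + tc, prev_path + [u])
        best.append(layer)
    return min(best[-1].values(), key=lambda v: v[0])[1]   # lowest-cost unit sequence
```

A practical system would add beam pruning at each layer, which is what keeps the search tractable over a large speech database.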
{
"docid": "a57e470ad16c025f6b0aae99de25f498",
"text": "Purpose To establish the efficacy and safety of botulinum toxin in the treatment of Crocodile Tear Syndrome and record any possible complications.Methods Four patients with unilateral aberrant VII cranial nerve regeneration following an episode of facial paralysis consented to be included in this study after a comprehensive explanation of the procedure and possible complications was given. On average, an injection of 20 units of botulinum toxin type A (Dysport®) was given to the affected lacrimal gland. The effect was assessed with a Schirmer’s test during taste stimulation. Careful recording of the duration of the effect and the presence of any local or systemic complications was made.Results All patients reported a partial or complete disappearance of the reflex hyperlacrimation following treatment. Schirmer’s tests during taste stimulation documented a significant decrease in tear secretion. The onset of effect of the botulinum toxin was typically 24–48 h after the initial injection and lasted 4–5 months. One patient had a mild increase in his preexisting upper lid ptosis, but no other local or systemic side effects were experienced.Conclusions The injection of botulinum toxin type A into the affected lacrimal glands of patients with gusto-lacrimal reflex is a simple, effective and safe treatment.",
"title": ""
}
] |
scidocsrr
|
6289d2b5c72f8e86e5ecabe38e48778a
|
Common Spatial Patterns in a few channel BCI interface
|
[
{
"docid": "68bb5cb195c910e0a52c81a42a9e141c",
"text": "With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.",
"title": ""
}
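The passage above describes decomposing a single-channel EEG trial with a short-time Fourier transform so that frequency bands act as pseudo-channels, then applying CSP and an SVM. The sketch below is one plausible reading of that pipeline, not the authors' code: band construction, class-averaged covariances, and CSP filters via a generalised eigendecomposition. The sampling rate, window length and number of filters are assumed values.

```python
import numpy as np
from scipy.signal import stft
from scipy.linalg import eigh

def band_channels(trial, fs=250, nperseg=64):
    """Turn a 1-D EEG trial into a (bands x frames) pseudo-multi-channel matrix."""
    _, _, Z = stft(trial, fs=fs, nperseg=nperseg)
    return np.abs(Z)                           # magnitude per frequency bin over time

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: lists of 1-D trials for the two motor-imagery classes."""
    def avg_cov(trials):
        covs = []
        for t in trials:
            X = band_channels(t)
            C = X @ X.T
            covs.append(C / np.trace(C))       # normalised spatial covariance
        return np.mean(covs, axis=0)
    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # generalised eigendecomposition: extremal eigenvectors maximise the
    # variance ratio between the two classes
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, picks].T                    # rows are CSP spatial filters
```

Log-variance features of the filtered trials would then typically be fed to an SVM classifier, mirroring the CSP-plus-SVM pairing described in the passage.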
] |
[
{
"docid": "b852aad0d205aff17cd8a9b7c21ed99f",
"text": "In present investigation, two glucose based smart tumor-targeted drug delivery systems coupled with enzyme-sensitive release strategy are introduced. Magnetic nanoparticles (Fe3O4) were grafted with carboxymethyl chitosan (CS) and β-cyclodextrin (β-CD) as carriers. Prodigiosin (PG) was used as the model anti-tumor drug, targeting aggressive tumor cells. The morphology, properties and composition and grafting process were characterized by transmission electron microscope (TEM), Fourier transform infrared spectroscopy (FT-IR), vibration sample magnetometer (VSM), X-ray diffraction (XRD) analysis. The results revealed that the core crystal size of the nanoparticles synthesized were 14.2±2.1 and 9.8±1.4nm for β-CD and CS-MNPs respectively when measured using TEM; while dynamic light scattering (DLS) gave diameters of 121.1 and 38.2nm. The saturation magnetization (Ms) of bare magnetic nanoparticles is 50.10emucm-3, while modification with β-CD and CS gave values of 37.48 and 65.01emucm-3, respectively. The anticancer compound, prodigiosin (PG) was loaded into the NPs with an encapsulation efficiency of approximately 81% for the β-CD-MNPs, and 92% for the CS-MNPs. This translates to a drug loading capacity of 56.17 and 59.17mg/100mg MNPs, respectively. Measurement of in vitro release of prodigiosin from the loaded nanocarriers in the presence of the hydrolytic enzymes, alpha-amylase and chitosanase showed that 58.1 and 44.6% of the drug was released after one-hour of incubation. Cytotoxicity studies of PG-loaded nanocarriers on two cancer cell lines, MCF-7 and HepG2, and on a non-cancerous control, NIH/3T3 cells, revealed that the drug loaded nanoparticles had greater efficacy on the cancer cell lines. The selective index (SI) for free PG on MCF-7 and HepG2 cells was 1.54 and 4.42 respectively. This parameter was reduced for PG-loaded β-CD-MNPs to 1.27 and 1.85, while the SI for CS-MNPs improved considerably to 7.03 on MCF-7 cells. Complementary studies by fluorescence and confocal microscopy and flow cytometry confirm specific targeting of the nanocarriers to the cancer cells. The results suggest that CS-MNPs have higher potency and are better able to target the prodigiosin toxicity effect on cancerous cells than β-CD-MNPs.",
"title": ""
},
{
"docid": "c615480e70f3baa5589d0c620549967a",
"text": "A common task in image editing is to change the colours of a picture to match the desired colour grade of another picture. Finding the correct colour mapping is tricky because it involves numerous interrelated operations, like balancing the colours, mixing the colour channels or adjusting the contrast. Recently, a number of automated tools have been proposed to find an adequate one-to-one colour mapping. The focus in this paper is on finding the best linear colour transformation. Linear transformations have been proposed in the literature but independently. The aim of this paper is thus to establish a common mathematical background to all these methods. Also, this paper proposes a novel transformation, which is derived from the Monge-Kantorovicth theory of mass transportation. The proposed solution is optimal in the sense that it minimises the amount of changes in the picture colours. It favourably compares theoretically and experimentally with other techniques for various images and under various colour spaces.",
"title": ""
},
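For the linear colour mapping discussed above, a commonly used closed-form solution maps mean-centred source pixels through T = Σs^{-1/2} (Σs^{1/2} Σt Σs^{1/2})^{1/2} Σs^{-1/2}, built from the source and target colour covariances. The sketch below implements that standard formula as an illustration; it is not claimed to be the exact estimator proposed in the paper, and the RGB pixel layout is an assumption of this sketch.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def mk_colour_transfer(src, tgt):
    """src, tgt: float arrays of shape (N, 3) / (M, 3) of e.g. RGB pixels."""
    mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
    S = np.cov(src, rowvar=False)                    # source colour covariance
    T = np.cov(tgt, rowvar=False)                    # target colour covariance
    S_half = np.real(sqrtm(S))                       # matrix square root (real part)
    S_inv_half = inv(S_half)
    M = S_inv_half @ np.real(sqrtm(S_half @ T @ S_half)) @ S_inv_half
    return (src - mu_s) @ M.T + mu_t                 # pixels with target-like colour statistics
```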
{
"docid": "8a35d273a4f45e64b43bf3a7d02db4ed",
"text": "Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semisupervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of oversmoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.",
"title": ""
},
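The abstract above argues that the GCN graph convolution is a form of Laplacian smoothing. As a minimal sketch of the operation being discussed, the code below applies the renormalised propagation rule H' = ReLU(D̃^{-1/2}(A+I)D̃^{-1/2} H W) with random placeholder weights; stacking many such layers repeatedly averages features over neighbourhoods, which is the over-smoothing concern raised there.

```python
import numpy as np

def normalized_adjacency(A):
    """Renormalised adjacency D~^{-1/2} (A + I) D~^{-1/2} used by GCN layers."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, H, W):
    """One propagation step: ReLU(A_norm H W)."""
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy demo: 4 nodes on a path graph, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)                             # random placeholder weights
print(gcn_layer(normalized_adjacency(A), H, W))
```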
{
"docid": "292eea3f09d135f489331f876052ce88",
"text": "-Steganography is a term used for covered writing. Steganography can be applied on different file formats, such as audio, video, text, image etc. In image steganography, data in the form of image is hidden under some image by using transformations such as ztransformation, integer wavelet transformation, DWT etc and then sent to the destination. At the destination, the data is extracted from the cover image using the inverse transformation. This paper presents a new approach for image steganography using DWT. The cover image is divided into higher and lower frequency sub-bands and data is embedded into higher frequency sub-bands. Arnold Transformation is used to increase the security. The proposed approach is implemented in MATLAB 7.0 and evaluated on the basis of PSNR, capacity and correlation. The proposed approach results in high capacity image steganography as compared to existing approaches. Keywords-Image Steganography, PSNR, Discrete Wavelet Transform.",
"title": ""
},
{
"docid": "d074d4154c775377f58c236187e70699",
"text": "For a long time PDF documents have arrived in the everyday life of the average computer user, corporate businesses and critical structures, as authorities and military. Due to its wide spread in general, and because out-of-date versions of PDF readers are quite common, using PDF documents has become a popular malware distribution strategy. In this context, malicious documents have useful features: they are trustworthy, attacks can be camouflaged by inconspicuous document content, but still, they can often download and install malware undetected by firewall and anti-virus software. In this paper we present PDF Scrutinizer, a malicious PDF detection and analysis tool. We use static, as well as, dynamic techniques to detect malicious behavior in an emulated environment. We evaluate the quality and the performance of the tool with PDF documents from the wild, and show that PDF Scrutinizer reliably detects current malicious documents, while keeping a low false-positive rate and reasonable runtime performance.",
"title": ""
},
{
"docid": "44de39859665488f8df950007d7a01c6",
"text": "Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the “anchor” method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word-cooccurrence to include metadataspecific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., “buffy” and “spike” for positive sentiment vs. “twilight” and “cullen” for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding “anchor words”—precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words—words which unambiguously identify a particular topic. For instance, “wicket” might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. 
Q̄wicket,· will have high probability for “bowl”, “century”, “pitch”, and “bat”; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,",
"title": ""
},
{
"docid": "15d5de81246fff7cf4f679c58ce19a0f",
"text": "Self-transcendence has been associated, in previous studies, with stressful life events and emotional well-being. This study examined the relationships among self-transcendence, emotional well-being, and illness-related distress in women with advanced breast cancer. The study employed a cross-sectional correlational design in a convenience sample (n = 107) of women with Stage IIIb and Stage IV breast cancer. Subjects completed a questionnaire that included Reed's Self-Transcendence Scale; Bradburn's Affect Balance Scale (ABS); a Cognitive Well-Being (CWB) Scale based on work by Campbell, Converse, and Rogers; McCorkle and Young's Symptom Distress Scale (SDS); and the Karnofsky Performance Scale (KPS). Data were analyzed using factor analytic structural equations modeling. Self-transcendence decreased illness distress (assessed by the SDS and the KPS) through the mediating effect of emotional well-being (assessed by the ABS and the CWB Scale). Self-transcendence directly affected emotional well-being (beta = 0.69), and emotional well-being had a strong negative effect on illness distress (beta = -0.84). A direct path from self-transcendence to illness distress (beta = -0.31) became nonsignificant (beta = -0.08) when controlling for emotional well-being. Further research using longitudinal data will seek to validate these relationships and to explain how nurses can promote self-transcendence in women with advanced breast cancer, as well as in others with life-threatening illnesses.",
"title": ""
},
{
"docid": "5b1f9c744daf1798c3af8b717132f87f",
"text": "We have observed a growth in the number of qualitative studies that have no guiding set of philosophic assumptions in the form of one of the established qualitative methodologies. This lack of allegiance to an established qualitative approach presents many challenges for “generic qualitative” studies, one of which is that the literature lacks debate about how to do a generic study well. We encourage such debate and offer four basic requirements as a point of departure: noting the researchers’ position, distinguishing method and methodology, making explicit the approach to rigor, and identifying the researchers’ analytic lens.",
"title": ""
},
{
"docid": "b3d79bd054f5d1deb7cf24c5c8cf397a",
"text": "Waste collection is a highly visible municipal service that involves large expenditures and difficult operational problems, plus it is expensive to operate in terms of investment costs (i.e. vehicles fleet), operational costs (i.e. fuel, maintenances) and environmental costs (i.e. emissions, noise and traffic congestions). Modern traceability devices, like volumetric sensors, identification RFID (Radio Frequency Identification) systems, GPRS (General Packet Radio Service) and GPS (Global Positioning System) technology, permit to obtain data in real time, which is fundamental to implement an efficient and innovative waste collection routing model. The basic idea is that knowing the real time data of each vehicle and the real time replenishment level at each bin makes it possible to decide, in function of the waste generation pattern, what bin should be emptied and what should not, optimizing different aspects like the total covered distance, the necessary number of vehicles and the environmental impact. This paper describes a framework about the traceability technology available in the optimization of solid waste collection, and introduces an innovative vehicle routing model integrated with the real time traceability data, starting the application in an Italian city of about 100,000 inhabitants. The model is tested and validated using simulation and an economical feasibility study is reported at the end of the paper.",
"title": ""
},
{
"docid": "52e21597d51d33e21b30f6a862e5aa98",
"text": "The epidermal growth factor receptor (EGFR)-directed tyrosine kinase inhibitors (TKIs) gefitinib, erlotinib and afatinib are approved treatments for non-small cell lung cancers harbouring activating mutations in the EGFR kinase, but resistance arises rapidly, most frequently owing to the secondary T790M mutation within the ATP site of the receptor. Recently developed mutant-selective irreversible inhibitors are highly active against the T790M mutant, but their efficacy can be compromised by acquired mutation of C797, the cysteine residue with which they form a key covalent bond. All current EGFR TKIs target the ATP-site of the kinase, highlighting the need for therapeutic agents with alternative mechanisms of action. Here we describe the rational discovery of EAI045, an allosteric inhibitor that targets selected drug-resistant EGFR mutants but spares the wild-type receptor. The crystal structure shows that the compound binds an allosteric site created by the displacement of the regulatory C-helix in an inactive conformation of the kinase. The compound inhibits L858R/T790M-mutant EGFR with low-nanomolar potency in biochemical assays. However, as a single agent it is not effective in blocking EGFR-driven proliferation in cells owing to differential potency on the two subunits of the dimeric receptor, which interact in an asymmetric manner in the active state. We observe marked synergy of EAI045 with cetuximab, an antibody therapeutic that blocks EGFR dimerization, rendering the kinase uniformly susceptible to the allosteric agent. EAI045 in combination with cetuximab is effective in mouse models of lung cancer driven by EGFR(L858R/T790M) and by EGFR(L858R/T790M/C797S), a mutant that is resistant to all currently available EGFR TKIs. More generally, our findings illustrate the utility of purposefully targeting allosteric sites to obtain mutant-selective inhibitors.",
"title": ""
},
{
"docid": "aa72af5867ec5862706fc66bacfd622a",
"text": "This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method.",
"title": ""
},
{
"docid": "ba02d2ecf18d0bc24a3e8884b5de54ed",
"text": "We present glasses: Global optimisation with Look-Ahead through Stochastic Simulation and Expected-loss Search. The majority of global optimisation approaches in use are myopic, in only considering the impact of the next function value; the non-myopic approaches that do exist are able to consider only a handful of future evaluations. Our novel algorithm, glasses, permits the consideration of dozens of evaluations into the future. This is done by approximating the ideal look-ahead loss function, which is expensive to evaluate, by a cheaper alternative in which the future steps of the algorithm are simulated beforehand. An Expectation Propagation algorithm is used to compute the expected value of the loss. We show that the far-horizon planning thus enabled leads to substantive performance gains in empirical tests.",
"title": ""
},
{
"docid": "8dc3ba4784ea55183e96b466937d050b",
"text": "One of the major problems that clinical neuropsychology has had in memory clinics is to apply ecological, easily administrable and sensitive tests that can make the diagnosis of dementia both precocious and reliable. Often the choice of the best neuropsychological test is hard because of a number of variables that can influence a subject’s performance. In this regard, tests originally devised to investigate cognitive functions in healthy adults are not often appropriate to analyze cognitive performance in old subjects with low education because of their intrinsically complex nature. In the present paper, we present normative values for the Rey–Osterrieth Complex Figure B Test (ROCF-B) a simple test that explores constructional praxis and visuospatial memory. We collected normative data of copy, immediate and delayed recall of the ROCF-B in a group of 346 normal Italian subjects above 40 years. A multiple regression analysis was performed to evaluate the potential effect of age, sex, and education on the three tasks administered to the subjects. Age and education had a significant effect on copying, immediate recall, and delayed recall as well as on the rate of forgetting. Correction grids and equivalent scores with cut-off values relative to each task are available. The availability of normative values can make the ROCF-B a valid instrument to assess non-verbal memory in adults and in the elderly for whom the commonly used ROCF-A is too demanding.",
"title": ""
},
{
"docid": "529e132a37f9fb37ddf04984236f4b36",
"text": "The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware.",
"title": ""
},
{
"docid": "6cc203d16e715cbd71efdeca380f3661",
"text": "PURPOSE\nTo determine a population-based estimate of communication disorders (CDs) in children; the co-occurrence of intellectual disability (ID), autism, and emotional/behavioral disorders; and the impact of these conditions on the prevalence of CDs.\n\n\nMETHOD\nSurveillance targeted 8-year-olds born in 1994 residing in 2002 in the 3 most populous counties in Utah (n = 26,315). A multiple-source record review was conducted at all major health and educational facilities.\n\n\nRESULTS\nA total of 1,667 children met the criteria of CD. The prevalence of CD was estimated to be 63.4 per 1,000 8-year-olds (95% confidence interval = 60.4-66.2). The ratio of boys to girls was 1.8:1. Four percent of the CD cases were identified with an ID and 3.7% with autism spectrum disorders (ASD). Adjusting the CD prevalence to exclude ASD and/or ID cases significantly affected the CD prevalence rate. Other frequently co-occurring emotional/behavioral disorders with CD were attention deficit/hyperactivity disorder, anxiety, and conduct disorder.\n\n\nCONCLUSIONS\nFindings affirm that CDs and co-occurring mental health conditions are a major educational and public health concern.",
"title": ""
},
{
"docid": "bd92af2495300beb16e8832a80e9fc25",
"text": "Increasingly, business analytics is seen to provide the possibilities for businesses to effectively support strategic decision-making, thereby to become a source of strategic business value. However, little research exists regarding the mechanism through which business analytics supports strategic decisionmaking and ultimately organisational performance. This paper draws upon literature on IT affordances and strategic decision-making to (1) understand the decision-making affordances provided by business analytics, and (2) develop a research model linking business analytics, data-driven culture, decision-making affordances, strategic decision-making, and organisational performance. The model is empirically tested using structural equation modelling based on 296 survey responses collected from UK businesses. The study produces four main findings: (1) business analytics has a positive effect on decision-making affordances both directly and indirectly through the mediation of a data-driven culture; (2) decision-making affordances significantly influence strategic decision comprehensiveness positively and intuitive decision-making negatively; (3) data-driven culture has a significant and positive effect on strategic decision comprehensiveness; and (4) strategic decision comprehensiveness has a positive effect on organisational performance but a negative effect on intuitive decision-making.",
"title": ""
},
{
"docid": "938e44b4c03823584d9f9fb9209a9b1e",
"text": "The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%) . Recently we were able to significantly improve this result, using graphics cards to greatly speed up training of simple but deep MLPs, which achieved 0.35%, outperforming all the previous more complex methods. Here we report another substantial improvement: 0.31% obtained using a committee of MLPs.",
"title": ""
},
{
"docid": "1ead17fc0770233db8903db2b4f15c79",
"text": "The major objective of this paper is to examine the determinants of collaborative commerce (c-commerce) adoption with special emphasis on Electrical and Electronic organizations in Malaysia. Original research using a self-administered questionnaire was distributed to 400 Malaysian organizations. Out of the 400 questionnaires posted, 109 usable questionnaires were returned, yielding a response rate of 27.25%. Data were analysed by using correlation and multiple regression analysis. External environment, organization readiness and information sharing culture were found to be significant in affecting organ izations decision to adopt c-commerce. Information sharing culture factor was found to have the strongest influence on the adoption of c-commerce, followed by organization readiness and external environment. Contrary to other technology adoption studies, this research found that innovation attributes have no significant influence on the adoption of c-commerce. In terms of theoretical contributions, this study has extended previous researches conducted in western countries and provides great potential by advancing the understanding between the association of adoption factors and c-commerce adoption level. This research show that adoption studies could move beyond studying the factors based on traditional adoption models. Organizations planning to adopt c-commerce would also be able to applied strategies based on the findings from this research.",
"title": ""
},
{
"docid": "736f8a02bbe5ab9a5b9dd5026430e05c",
"text": "We present a novel approach for interactive navigation and planning of multiple agents in crowded scenes with moving obstacles. Our formulation uses a precomputed roadmap that provides macroscopic, global connectivity for wayfinding and combines it with fast and localized navigation for each agent. At runtime, each agent senses the environment independently and computes a collision-free path based on an extended \"Velocity Obstacles\" concept. Furthermore, our algorithm ensures that each agent exhibits no oscillatory behaviors. We have tested the performance of our algorithm in several challenging scenarios with a high density of virtual agents. In practice, the algorithm performance scales almost linearly with the number of agents and can run at interactive rates on multi-core processors.",
"title": ""
},
{
"docid": "39be1d73b84872b0ae1d61bbd0fc96f8",
"text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.",
"title": ""
}
] |
scidocsrr
|
d6b38bfacf6234254620ae433837e2a4
|
A prediction based approach for stock returns using autoregressive neural networks
|
[
{
"docid": "186b18a1ce29ce50bf9309137c09a9b5",
"text": "This work presents a new prediction-based portfolio optimization model tha t can capture short-term investment opportunities. We used neural network predictors to predict stock s’ returns and derived a risk measure, based on the prediction errors, that have the same statistical foundation o f he mean-variance model. The efficient diversification effects holds thanks to the selection of predictor s with low and complementary pairwise error profiles. We employed a large set of experiments with real data from the Brazilian sto ck market to examine our portfolio optimization model, which included the evaluation of the Normality o f the prediction errors. Our results showed that it is possible to obtain Normal prediction errors with non-Normal time series of stock returns, and that the prediction-based portfolio optimization model to ok advantage of short term opportunities, outperforming the mean-variance model and beating the m arket index.",
"title": ""
},
{
"docid": "ee11c968b4280f6da0b1c0f4544bc578",
"text": "A report is presented of some results of an ongoing project using neural-network modeling and learning techniques to search for and decode nonlinear regularities in asset price movements. The author focuses on the case of IBM common stock daily returns. Having to deal with the salient features of economic data highlights the role to be played by statistical inference and requires modifications to standard learning techniques which may prove useful in other contexts.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "2a76205b80c90ff9a4ca3ccb0434bb03",
"text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.",
"title": ""
},
{
"docid": "2663800ed92ce1cd44ab1b7760c43e0f",
"text": "Synchronous reluctance motor (SynRM) have rather poor power factor. This paper investigates possible methods to improve the power factor (pf) without impacting its torque density. The study found two possible aspects to improve the power factor with either refining rotor dimensions and followed by current control techniques. Although it is a non-linear mathematical field, it is analysed by analytical equations and FEM simulation is utilized to validate the design progression. Finally, an analytical method is proposed to enhance pf without compromising machine torque density. There are many models examined in this study to verify the design process. The best design with high performance is used for final current control optimization simulation.",
"title": ""
},
{
"docid": "a1c126807088d954b73c2bd5d696c481",
"text": "or, why space syntax works when it looks as though it shouldn't 0 Abstract A common objection to the space syntax analysis of cities is that even in its own terms the technique of using a non-uniform line representation of space and analysing it by measures that are essentially topological, ignores too much geometric and metric detail to be credible. In this paper it is argued that far from ignoring geometric and metric properties the 'line-graph' internalises them into the structure of the graph and in doing so allows the graph analysis to pick up the nonlocal, or extrinsic, properties of spaces that are critical to the movement dynamics through which a city evolves its essential structures. Nonlocal properties are those which are defined by the relation of elements to all others in the system, rather than intrinsic to the element itself. The method also leads to a powerful analysis of urban structures because cities are essentially nonlocal systems. 1 Preliminaries 1.1 The critique of line graphs Space syntax is a family of techniques for representing and analysing spatial layouts of all kinds. A spatial representation is first chosen according to how space is defined for the purposes of the research-rooms, convex spaces, lines, convex isovists, and so on-and then one or more measures of 'configuration' are selected to analyse the patterns formed by that representation. Prior to the researcher setting up the research question, no one representation or measure is privileged over others. Part of the researcher's task is to discover which representation and which measure captures the logic of a particular system, as shown by observation of its functioning. In the study of cities, one representation and one type of measure has proved more consistently fruitful than others: the representation of urban space as a matrix of the 'longest and fewest' lines, the 'axial map', and the analysis of this by translating the line matrix into a graph, and the use of the various versions of the 'topological' (i.e. nonmetric) measure of patterns of line connectivity called 'integration'. (Hillier et al 1982, Steadman 1983, Hillier & Hanson 1984) This 'line graph' approach has proved quite unexpectedly successful. It has generated not only models for predicting urban et al 1998), but also strong theoretical results on urban structure, and even a general theory of the dynamics linking the urban grid, movement, land uses and building densities in 'organic' cities …",
"title": ""
},
{
"docid": "f13000c4870a85e491f74feb20f9b2d4",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "1c6a589d2c74bd1feb3e98c21a1375a9",
"text": "UNLABELLED\nMinimally invasive approach for groin hernia treatment is still controversial, but in the last decade, it tends to become the standard procedure for one day surgery. We present herein the technique of laparoscopic Trans Abdominal Pre Peritoneal approach (TAPP). The surgical technique is presented step-by step;the different procedures key points (e.g. anatomic landmarks recognition, diagnosis of \"occult\" hernias, preperitoneal and hernia sac dissection, mesh placement and peritoneal closure) are described and discussed in detail, several tips and tricks being noted and highlighted.\n\n\nCONCLUSIONS\nTAPP is a feasible method for treating groin hernia associated with low rate of postoperative morbidity and recurrence. The anatomic landmarks are easily recognizable. The laparoscopic exploration allows for the treatment of incarcerated strangulated hernias and the intraoperative diagnosis of occult hernias.",
"title": ""
},
{
"docid": "1fc58f0ed6c2fbd05f190b3d3da2d319",
"text": "Seismology is the scientific study of earthquakes & the propagation of seismic waves through the earth. The large improvement has been seen in seismology from around hundreds of years. The seismic data plays important role in the seismic data acquisition. This data can be used for analysis which helps to locate the correct location of the earthquake. The more efficient systems are used now a day to locate the earthquakes as large improvements has been done in this field. In older days analog systems are used for data acquisition. The analog systems record seismic signals in a permanent way. These systems are large in size, costly and are incompatible with computer. Due to these drawbacks these analog systems are replaced by digital systems so that data can be recorded digitally. Using different sensor to indentify the natural disaster, MEMS, VIBRATION sensor is used to monitor the earth condition , the different values of the different sensor is given to the ADC to convert the values in digital format, if any changes occurs or in abnormality condition BUZZER will ring.",
"title": ""
},
{
"docid": "537d6fdfb26e552fb3254addfbb6ac49",
"text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.",
"title": ""
},
{
"docid": "968555bbada2d930b97d8bb982580535",
"text": "With the recent developments in three-dimensional (3-D) scanner technologies and photogrammetric techniques, it is now possible to acquire and create accurate models of historical and archaeological sites. In this way, unrestricted access to these sites, which is highly desirable from both a research and a cultural perspective, is provided. Through the process of virtualisation, numerous virtual collections are created. These collections must be archives, indexed and visualised over a very long period of time in order to be able to monitor and restore them as required. However, the intrinsic complexities and tremendous importance of ensuring long-term preservation and access to these collections have been widely overlooked. This neglect may lead to the creation of a so-called “Digital Rosetta Stone”, where models become obsolete and the data cannot be interpreted or virtualised. This paper presents a framework for the long-term preservation of 3-D culture heritage data as well as the application thereof in monitoring, restoration and virtual access. The interplay between raw data and model is considered as well as the importance of calibration. Suitable archiving and indexing techniques are described and the issue of visualisation over a very long period of time is addressed. An approach to experimentation though detachment, migration and emulation is presented.",
"title": ""
},
{
"docid": "8b752b8607b6296b35d34bb59830e8e4",
"text": "The innate immune system is the first line of defense against infection and responses are initiated by pattern recognition receptors (PRRs) that detect pathogen-associated molecular patterns (PAMPs). PRRs also detect endogenous danger-associated molecular patterns (DAMPs) that are released by damaged or dying cells. The major PRRs include the Toll-like receptor (TLR) family members, the nucleotide binding and oligomerization domain, leucine-rich repeat containing (NLR) family, the PYHIN (ALR) family, the RIG-1-like receptors (RLRs), C-type lectin receptors (CLRs) and the oligoadenylate synthase (OAS)-like receptors and the related protein cyclic GMP-AMP synthase (cGAS). The different PRRs activate specific signaling pathways to collectively elicit responses including the induction of cytokine expression, processing of pro-inflammatory cytokines and cell-death responses. These responses control a pathogenic infection, initiate tissue repair and stimulate the adaptive immune system. A central theme of many innate immune signaling pathways is the clustering of activated PRRs followed by sequential recruitment and oligomerization of adaptors and downstream effector enzymes, to form higher-order arrangements that amplify the response and provide a scaffold for proximity-induced activation of the effector enzymes. Underlying the formation of these complexes are co-operative assembly mechanisms, whereby association of preceding components increases the affinity for downstream components. This ensures a rapid immune response to a low-level stimulus. Structural and biochemical studies have given key insights into the assembly of these complexes. Here we review the current understanding of assembly of immune signaling complexes, including inflammasomes initiated by NLR and PYHIN receptors, the myddosomes initiated by TLRs, and the MAVS CARD filament initiated by RIG-1. We highlight the co-operative assembly mechanisms during assembly of each of these complexes.",
"title": ""
},
{
"docid": "20c31eaaa80b66cf100ffd24b3b01ede",
"text": "Time series data has become a ubiquitous and important data source in many application domains. Most companies and organizations strongly rely on this data for critical tasks like decision-making, planning, predictions, and analytics in general. While all these tasks generally focus on actual data representing organization and business processes, it is also desirable to apply them to alternative scenarios in order to prepare for developments that diverge from expectations or assess the robustness of current strategies. When it comes to the construction of such what-if scenarios, existing tools either focus on scalar data or they address highly specific scenarios. In this work, we propose a generally applicable and easy-to-use method for the generation of what-if scenarios on time series data. Our approach extracts descriptive features of a data set and allows the construction of an alternate version by means of filtering and modification of these features.",
"title": ""
},
{
"docid": "f2a1e5d8e99977c53de9f2a82576db69",
"text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.",
"title": ""
},
{
"docid": "e640d487052b9399bea6c0d06ce189b0",
"text": "We propose a novel deep supervised neural network for the task of action recognition in videos, which implicitly takes advantage of visual tracking and shares the robustness of both deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). In our method, a multi-branch model is proposed to suppress noise from background jitters. Specifically, we firstly extract multi-level deep features from deep CNNs and feed them into 3dconvolutional network. After that we feed those feature cubes into our novel joint LSTM module to predict labels and to generate attention regularization. We evaluate our model on two challenging datasets: UCF101 and HMDB51. The results show that our model achieves the state-of-art by only using convolutional features.",
"title": ""
},
{
"docid": "17c0ef52e8f4dade526bf56f158967ef",
"text": "Consider a distributed computing setup consisting of a master node and n worker nodes, each equipped with p cores, and a function f (x) = g(f1(x), f2(x),…, fk(x)), where each fi can be computed independently of the rest. Assuming that the worker computational times have exponential tails, what is the minimum possible time for computing f? Can we use coding theory principles to speed up this distributed computation? In [1], it is shown that distributed computing of linear functions can be expedited by applying linear erasure codes. However, it is not clear if linear codes can speed up distributed computation of ‘nonlinear’ functions as well. To resolve this problem, we propose the use of sparse linear codes, exploiting the modern multicore processing architecture. We show that 1) our coding solution achieves the order optimal runtime, and 2) it is at least Θ(√log n) times faster than any uncoded schemes where the number of workers is n.",
"title": ""
},
{
"docid": "39d522e6db7971ccf8a9d3bd3a915a10",
"text": "The Internet of Things (IoT) is next generation technology that is intended to improve and optimize daily life by operating intelligent sensors and smart objects together. At application layer, communication of resourceconstrained devices is expected to use constrained application protocol (CoAP).Communication security is an important aspect of IoT environment. However closed source security solutions do not help in formulating security in IoT so that devices can communicate securely with each other. To protect the transmission of confidential information secure CoAP uses datagram transport layer security (DTLS) as the security protocol for communication and authentication of communicating devices. DTLS was initially designed for powerful devices that are connected through reliable and high bandwidth link. This paper proposes a collaboration of DTLS and CoAP for IoT. Additionally proposed DTLS header compression scheme that helps to reduce packet size, energy consumption and avoids fragmentation by complying the 6LoWPAN standards. Also proposed DTLS header compression scheme does not compromises the point-to-point security provided by DTLS. Since DTLS has chosen as security protocol underneath the CoAP, enhancement to the existing DTLS also provided by introducing the use of raw public key in DTLS.",
"title": ""
},
{
"docid": "477ca9c55310235c691f6420d63005a7",
"text": "We present Sigma*, a novel technique for learning symbolic models of software behavior. Sigma* addresses the challenge of synthesizing models of software by using symbolic conjectures and abstraction. By combining dynamic symbolic execution to discover symbolic input-output steps of the programs and counterexample guided abstraction refinement to over-approximate program behavior, Sigma* transforms arbitrary source representation of programs into faithful input-output models. We define a class of stream filters---programs that process streams of data items---for which Sigma* converges to a complete model if abstraction refinement eventually builds up a sufficiently strong abstraction. In other words, Sigma* is complete relative to abstraction. To represent inferred symbolic models, we use a variant of symbolic transducers that can be effectively composed and equivalence checked. Thus, Sigma* enables fully automatic analysis of behavioral properties such as commutativity, reversibility and idempotence, which is useful for web sanitizer verification and stream programs compiler optimizations, as we show experimentally. We also show how models inferred by Sigma* can boost performance of stream programs by parallelized code generation.",
"title": ""
},
{
"docid": "ea5697d417fe154be77d941c19d8a86e",
"text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.",
"title": ""
},
{
"docid": "5104891f21240e2ac0f0480e4b5da28e",
"text": "The paper describes the modeling and control of a robot with flexible joints (the DLR medical robot), which has strong mechanical couplings between pairs of joints realized with a differential gear-box. Because of this coupling, controllers developed before for the DLR light-weight robots cannot be directly applied. The previous control approach is extended in order to allow a multi-input-multi-output (MIMO) design for the strongly coupled joints. Asymptotic stability is shown for the MIMO controller. Finally, experimental results with the DLR medical robot are presented.",
"title": ""
},
{
"docid": "10e66f0c9cc3532029de388c2018f8ed",
"text": "1. ABSTRACT WC have developed a series of lifelike computer characters called Virtual Petz. These are autonomous agents with real-time layered 3D animation and sound. Using a mouse the user moves a hand-shaped cursor to directly touch, pet, and pick up the characters, as well as use toys and objects. Virtual Petz grow up over time on the user’s PC computer desktop, and strive to be the user’s friends and companions. They have evolving social relationships with the user and each other. To implement these agents we have invented hybrid techniques that draw from cartoons, improvisational drama, AI and video games. 1.1",
"title": ""
},
{
"docid": "a636f977eb29b870cefe040f3089de44",
"text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.",
"title": ""
},
{
"docid": "f050f004d455648767c8663768cfcc42",
"text": "In this paper, a metamaterial-based scanning leaky-wave antenna is developed, and then applied to the Doppler radar system for the noncontact vital-sign detection. With the benefit of the antenna beam scanning, the radar system can not only sense human subject, but also detect the vital signs within the specific scanning region. The Doppler radar module is designed at 5.8 GHz, and implemented by commercial integrated circuits and coplanar waveguide (CPW) passive components. Two scanning antennas are then connected with the transmitting and receiving ports of the module, respectively. In addition, since the main beam of the developed scanning antenna is controlled by the frequency, one can easily tune the frequency of the radar source from 5.1 to 6.5 GHz to perform the 59° spatial scanning. The measured respiration and heartbeat rates are in good agreement with the results acquired from the medical finger pulse sensor.",
"title": ""
}
] |
scidocsrr
|
47bc10208873706fd75e142b84e15dd7
|
Policy development and implementation in health promotion--from theory to practice: the ADEPT model.
|
[
{
"docid": "1137cdf90ff6229865ae20980739afc5",
"text": "This paper addresses the role of policy and evidence in health promotion. The concept of von Wright’s “logic of events” is introduced and applied to health policy impact analysis. According to von Wright (1976), human action can be explained by a restricted number of determinants: wants, abilities, duties, and opportunities. The dynamics of action result from changes in opportunities (logic of events). Applied to the policymaking process, the present model explains personal wants as subordinated to political goals. Abilities of individual policy makers are part of organisational resources. Also, personal duties are subordinated to institutional obligations. Opportunities are mainly related to political context and public support. The present analysis suggests that policy determinants such as concrete goals, sufficient resources and public support may be crucial for achieving an intended behaviour change on the population level, while other policy determinants, e.g., personal commitment and organisational capacities, may especially relate to the policy implementation process. The paper concludes by indicating ways in which future research using this theoretical framework might contribute to health promotion practice for improved health outcomes across populations.",
"title": ""
}
] |
[
{
"docid": "c47525f2456de0b9b87a5ebbb5a972fb",
"text": "This article reviews the potential use of visual feedback, focusing on mirror visual feedback, introduced over 15 years ago, for the treatment of many chronic neurological disorders that have long been regarded as intractable such as phantom pain, hemiparesis from stroke and complex regional pain syndrome. Apart from its clinical importance, mirror visual feedback paves the way for a paradigm shift in the way we approach neurological disorders. Instead of resulting entirely from irreversible damage to specialized brain modules, some of them may arise from short-term functional shifts that are potentially reversible. If so, relatively simple therapies can be devised--of which mirror visual feedback is an example--to restore function.",
"title": ""
},
{
"docid": "0869fee5888a97f424856570f2b9dc2c",
"text": "This paper evaluates the four leading techniques proposed in the literature for construction of prediction intervals (PIs) for neural network point forecasts. The delta, Bayesian, bootstrap, and mean-variance estimation (MVE) methods are reviewed and their performance for generating high-quality PIs is compared. PI-based measures are proposed and applied for the objective and quantitative assessment of each method's performance. A selection of 12 synthetic and real-world case studies is used to examine each method's performance for PI construction. The comparison is performed on the basis of the quality of generated PIs, the repeatability of the results, the computational requirements and the PIs variability with regard to the data uncertainty. The obtained results in this paper indicate that: 1) the delta and Bayesian methods are the best in terms of quality and repeatability, and 2) the MVE and bootstrap methods are the best in terms of low computational load and the width variability of PIs. This paper also introduces the concept of combinations of PIs, and proposes a new method for generating combined PIs using the traditional PIs. Genetic algorithm is applied for adjusting the combiner parameters through minimization of a PI-based cost function subject to two sets of restrictions. It is shown that the quality of PIs produced by the combiners is dramatically better than the quality of PIs obtained from each individual method.",
"title": ""
},
{
"docid": "208b4cb4dc4cee74b9357a5ebb2f739c",
"text": "We report improved AMR parsing results by adding a new action to a transitionbased AMR parser to infer abstract concepts and by incorporating richer features produced by auxiliary analyzers such as a semantic role labeler and a coreference resolver. We report final AMR parsing results that show an improvement of 7% absolute in F1 score over the best previously reported result. Our parser is available at: https://github.com/ Juicechuan/AMRParsing",
"title": ""
},
{
"docid": "bc272e837f1071fabcc7056134bae784",
"text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.",
"title": ""
},
{
"docid": "5c2b73276c9f845d7eef5c9dc4cea2a1",
"text": "The detection of QR codes, a type of 2D barcode, as described in the literature consists merely in the determination of the boundaries of the symbol region in images obtained with the specific intent of highlighting the symbol. However, many important applications such as those related with accessibility technologies or robotics, depends on first detecting the presence of a barcode in an environment. We employ Viola-Jones rapid object detection framework to address the problem of finding QR codes in arbitrarily acquired images. This framework provides an efficient way to focus the detection process in promising regions of the image and a very fast feature calculation approach for pattern classification. An extensive study of variations in the parameters of the framework for detecting finder patterns, present in three corners of every QR code, was carried out. Detection accuracy superior to 90%, with controlled number of false positives, is achieved. We also propose a post-processing algorithm that aggregates the results of the first step and decides if the detected finder patterns are part of QR code symbols. This two-step processing is done in real time.",
"title": ""
},
{
"docid": "43654115b3c64eef7b3a26d90c092e9b",
"text": "We investigate the problem of domain adaptation for parallel data in Statistical Machine Translation (SMT). While techniques for domain adaptation of monolingual data can be borrowed for parallel data, we explore conceptual differences between translation model and language model domain adaptation and their effect on performance, such as the fact that translation models typically consist of several features that have different characteristics and can be optimized separately. We also explore adapting multiple (4–10) data sets with no a priori distinction between in-domain and out-of-domain data except for an in-domain development set.",
"title": ""
},
{
"docid": "3266a3d561ee91e8f08d81e1aac6ac1b",
"text": "The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metricfairness does generalize, and leverage these generalization guarantees to construct polynomialtime PACF learning algorithms for the classes of linear and logistic predictors. [email protected]. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17). [email protected]. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17).",
"title": ""
},
{
"docid": "b640ed2bd02ba74ee0eb925ef6504372",
"text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.",
"title": ""
},
{
"docid": "30750e5ee653ee623f6ec38e957f4843",
"text": "Chroma is a widespread feature for cover song recognition, as it is robust against non-tonal components and independent of timbre and specific instruments. However, Chroma is derived from spectrogram, thus it provides a coarse approximation representation of musical score. In this paper, we proposed a similar but more effective feature Note Class Profile (NCP) derived with music transcription techniques. NCP is a multi-dimensional time serie, each column of which denotes the energy distribution of 12 note classes. Experimental results on benchmark datasets demonstrated its superior performance over existing music features. In addition, NCP feature can be enhanced further with the development of music transcription techniques. The source code can be found in github1.",
"title": ""
},
{
"docid": "ace30c4ad4a74f1ba526b4868e47b5c5",
"text": "China and India are home to two of the world's largest populations, and both populations are aging rapidly. Our data compare health status, risk factors, and chronic diseases among people age forty-five and older in China and India. By 2030, 65.6 percent of the Chinese and 45.4 percent of the Indian health burden are projected to be borne by older adults, a population with high levels of noncommunicable diseases. Smoking (26 percent in both China and India) and inadequate physical activity (10 percent and 17.7 percent, respectively) are highly prevalent. Health policy and interventions informed by appropriate data will be needed to avert this burden.",
"title": ""
},
{
"docid": "c4ccb674a07ba15417f09b81c1255ba8",
"text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.",
"title": ""
},
{
"docid": "9a8901f5787bf6db6900ad2b4b6291c5",
"text": "MOTIVATION\nAs biological inquiry produces ever more network data, such as protein-protein interaction networks, gene regulatory networks and metabolic networks, many algorithms have been proposed for the purpose of pairwise network alignment-finding a mapping from the nodes of one network to the nodes of another in such a way that the mapped nodes can be considered to correspond with respect to both their place in the network topology and their biological attributes. This technique is helpful in identifying previously undiscovered homologies between proteins of different species and revealing functionally similar subnetworks. In the past few years, a wealth of different aligners has been published, but few of them have been compared with one another, and no comprehensive review of these algorithms has yet appeared.\n\n\nRESULTS\nWe present the problem of biological network alignment, provide a guide to existing alignment algorithms and comprehensively benchmark existing algorithms on both synthetic and real-world biological data, finding dramatic differences between existing algorithms in the quality of the alignments they produce. Additionally, we find that many of these tools are inconvenient to use in practice, and there remains a need for easy-to-use cross-platform tools for performing network alignment.",
"title": ""
},
{
"docid": "df3cad5eb68df1bc5d6770f4f700ac65",
"text": "Substrate integrated waveguide (SIW) cavity-backed antenna arrays have advantages of low-profile, high-gain and low-cost fabrication. However, traditional SIW cavity-backed antenna arrays usually load with extra feeding networks, which make the whole arrays larger and more complex. A novel 4 × 4 SIW cavity-backed antenna array without using individual feeding network is presented in this letter. The proposed antenna array consists of sixteen SIW cavities connected by inductive windows as feeding network and wide slots on the surface of each cavity as radiating part. Without loading with extra feeding network, the array is compact.",
"title": ""
},
{
"docid": "710febdd18f40c9fc82f8a28039362cc",
"text": "The paper deals with engineering an electric wheelchair from a common wheelchair and then developing a Brain Computer Interface (BCI) between the electric wheelchair and the human brain. A portable EEG headset and firmware signal processing together facilitate the movement of the wheelchair integrating mind activity and frequency of eye blinks of the patient sitting on the wheelchair with the help of Microcontroller Unit (MCU). The target population for the mind controlled wheelchair is the patients who are paralyzed below the neck and are unable to use conventional wheelchair interfaces. This project aims at creating a cost efficient solution, later intended to be distributed as an add-on conversion unit for a common manual wheelchair. A Neurosky mind wave headset is used to pick up EEG signals from the brain. This is a commercialized version of the Open-EEG Project. The signal obtained from EEG sensor is processed by the ARM microcontroller FRDM KL-25Z, a Freescale board. The microcontroller takes decision for determining the direction of motion of wheelchair based on floor detection and obstacle avoidance sensors mounted on wheelchair’s footplate. The MCU shows real time information on a color LCD interfaced to it. Joystick control of the wheelchair is also provided as an additional interface option that can be chosen from the menu system of the project.",
"title": ""
},
{
"docid": "e30ae0b5cd90d091223ab38596de3109",
"text": "1 Abstract We describe a consistent hashing algorithm which performs multiple lookups per key in a hash table of nodes. It requires no additional storage beyond the hash table, and achieves a peak-to-average load ratio of 1 + ε with just 1 + 1 ε lookups per key.",
"title": ""
},
{
"docid": "60cfdc554e1078263370514ec3f04a90",
"text": "Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging. In this paper, we focus on sequence-to-sequence models for open-domain dialogue response generation and propose a new method to evaluate the extent to which such models are able to generate responses that reflect different personality traits.",
"title": ""
},
{
"docid": "3d2666ab3b786fd02bb15e81b0eaeb37",
"text": "BACKGROUND\n The analysis of nursing errors in clinical management highlighted that clinical handover plays a pivotal role in patient safety. Changes to handover including conducting handover at the bedside and the use of written handover summary sheets were subsequently implemented.\n\n\nAIM\n The aim of the study was to explore nurses' perspectives on the introduction of bedside handover and the use of written handover sheets.\n\n\nMETHOD\n Using a qualitative approach, data were obtained from six focus groups containing 30 registered and enrolled (licensed practical) nurses. Thematic analysis revealed several major themes.\n\n\nFINDINGS\n Themes identified included: bedside handover and the strengths and weaknesses; patient involvement in handover, and good communication is about good communicators. Finally, three sources of patient information and other issues were also identified as key aspects.\n\n\nCONCLUSIONS\n How bedside handover is delivered should be considered in relation to specific patient caseloads (patients with cognitive impairments), the shift (day, evening or night shift) and the model of service delivery (team versus patient allocation).\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n Flexible handover methods are implicit within clinical setting issues especially in consideration to nursing teamwork. Good communication processes continue to be fundamental for successful handover processes.",
"title": ""
},
{
"docid": "b6eeb0f99ae856acb1bf2fef4d73c517",
"text": "We propose a probabilistic matrix factorization model for collaborative filtering that learns from data that is missing not at random (MNAR). Matrix factorization models exhibit state-of-the-art predictive performance in collaborative filtering. However, these models usually assume that the data is missing at random (MAR), and this is rarely the case. For example, the data is not MAR if users rate items they like more than ones they dislike. When the MAR assumption is incorrect, inferences are biased and predictive performance can suffer. Therefore, we model both the generative process for the data and the missing data mechanism. By learning these two models jointly we obtain improved performance over state-of-the-art methods when predicting the ratings and when modeling the data observation process. We present the first viable MF model for MNAR data. Our results are promising and we expect that further research on NMAR models will yield large gains in collaborative filtering.",
"title": ""
},
{
"docid": "cc6c485fdd8d4d61c7b68bfd94639047",
"text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.",
"title": ""
},
{
"docid": "f91e1638e4812726ccf96f410da2624b",
"text": "We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and -greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.",
"title": ""
}
] |
scidocsrr
|
6ac2ce9b4ff1957ba459881dd4b625f8
|
Data Storage Security and Privacy in Cloud Computing : A Comprehensive Survey
|
[
{
"docid": "02564434d1dab0031718a10400a59593",
"text": "The advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in cloud, it is crucial for the search service to allow multi-keyword query and provide result similarity ranking to meet the effective data retrieval need. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-keyword semantics, we choose the efficient principle of \" coordinate matching \" , i.e., as many matches as possible, to capture the similarity between search query and data documents, and further use \" inner product similarity \" to quantitatively formalize such principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. Thorough analysis investigating privacy and efficiency guarantees of proposed schemes is given, and experiments on the real-world dataset further show proposed schemes indeed introduce low overhead on computation and communication. INTRODUCTION Due to the rapid expansion of data, the data owners tend to store their data into the cloud to release the burden of data storage and maintenance [1]. However, as the cloud customers and the cloud server are not in the same trusted domain, our outsourced data may be under the exposure to the risk. Thus, before sent to the cloud, the sensitive data needs to be encrypted to protect for data privacy and combat unsolicited accesses. Unfortunately, the traditional plaintext search methods cannot be directly applied to the encrypted cloud data any more. The traditional information retrieval (IR) has already provided multi-keyword ranked search for the data user. In the same way, the cloud server needs provide the data user with the similar function, while protecting data and search privacy. It …",
"title": ""
}
] |
[
{
"docid": "c39fe902027ba5cb5f0fa98005596178",
"text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: [email protected]; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.",
"title": ""
},
{
"docid": "a90a20f66d3e73947fbc28dc60bcee24",
"text": "It is well known that the performance of speech recognition algorithms degrade in the presence of adverse environments where a speaker is under stress, emotion, or Lombard effect. This study evaluates the effectiveness of traditional features in recognition of speech under stress and formulates new features which are shown to improve stressed speech recognition. The focus is on formulating robust features which are less dependent on the speaking conditions rather than applying compensation or adaptation techniques. The stressed speaking styles considered are simulated angry and loud, Lombard effect speech, and noisy actual stressed speech from the SUSAS database which is available on CD-ROM through the NATO IST/TG-01 research group and LDC1 . In addition, this study investigates the immunity of linear prediction power spectrum and fast Fourier transform power spectrum to the presence of stress. Our results show that unlike fast Fourier transform’s (FFT) immunity to noise, the linear prediction power spectrum is more immune than FFT to stress as well as to a combination of a noisy and stressful environment. Finally, the effect of various parameter processing such as fixed versus variable preemphasis, liftering, and fixed versus cepstral mean normalization are studied. Two alternative frequency partitioning methods are proposed and compared with traditional mel-frequency cepstral coefficients (MFCC) features for stressed speech recognition. It is shown that the alternate filterbank frequency partitions are more effective for recognition of speech under both simulated and actual stressed conditions.",
"title": ""
},
{
"docid": "ce8262364b1a1b840e50f876c6d959fe",
"text": "Architectural styles, object-oriented design, and design patterns all hold promise as approaches that simplify software design and reuse by capturing and exploiting system design knowledge. This article explores the capabilities and roles of the various approaches, their strengths, and their limitations. oftware system builders increasingly recognize the importance of exploiting design knowledge in the engineering of new systems. Several distinct but related approaches hold promise. One approach is to focus on the architectural level of system design—the gross structure of a system as a composition of interacting parts. Architectural designs illuminate such key issues as scaling and portability, the assignment of functionality to design elements, interaction protocols between elements, and global system properties such as processing rates, end-to-end capacities, and overall performance.1 Architectural descriptions tend to be informal and idiosyncratic: box-and-line diagrams convey essential system structure, with accompanying prose explaining the meaning of the symbols. Nonetheless, they provide a critical staging point for determining whether a system can meet its essential requirements, and they guide implementers in constructing the system. More recently, architectural descriptions have been used for codifying and reusing design knowledge. Much of their power comes from use of idiomatic architectural terms, such as “clientserver system,” “layered system,” or “blackboard organization.”",
"title": ""
},
{
"docid": "d2521791d515b69d5a4a8c9ea02e3d17",
"text": "In this paper, four-wheel active steering (4WAS), which can control the front wheel steering angle and rear wheel steering angle independently, has been investigated based on the analysis of deficiency of conventional four wheel steering (4WS). A model following control structure is adopted to follow the desired yaw rate and vehicle sideslip angle, which consists of feedforward and feedback controller. The feedback controller is designed based on the optimal control theory, minimizing the tracking errors between the outputs of actual vehicle model and that of linear reference model. Finally, computer simulations are performed to evaluate the proposed control system via the co-simulation of Matlab/Simulink and CarSim. Simulation results show that the designed 4WAS controller can achieve the good response performance and improve the vehicle handling and stability.",
"title": ""
},
{
"docid": "09d4f38c87d6cc0e2cb6b1a7caad10f8",
"text": "Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDPs with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDPs which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer–Monteiro formulation of SDPs in that class almost never has any spurious local optima. This paper was corrected on April 9, 2018. Theorems 2 and 4 had the assumption that M (1) is a manifold. From this assumption it was stated that TYM = {Ẏ ∈ Rn×p : A(Ẏ Y >+ Y Ẏ >) = 0}, which is not true in general. To ensure this identity, the theorems now make the stronger assumption that gradients of the constraintsA(Y Y >) = b are linearly independent for all Y inM. All examples treated in the paper satisfy this assumption. Appendix D gives details.",
"title": ""
},
{
"docid": "065466185ba541472ae84e0b5cf5e864",
"text": "A significant challenge for crowdsourcing has been increasing worker engagement and output quality. We explore the effects of social, learning, and financial strategies, and their combinations, on increasing worker retention across tasks and change in the quality of worker output. Through three experiments, we show that 1) using these strategies together increased workers' engagement and the quality of their work; 2) a social strategy was most effective for increasing engagement; 3) a learning strategy was most effective in improving quality. The findings of this paper provide strategies for harnessing the crowd to perform complex tasks, as well as insight into crowd workers' motivation.",
"title": ""
},
{
"docid": "d984489b4b71eabe39ed79fac9cf27a1",
"text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use to pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first objectoriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing",
"title": ""
},
{
"docid": "0d28ddef1fa86942da679aec23dff890",
"text": "Electronic patient records remain a rather unexplored, but potentially rich data source for discovering correlations between diseases. We describe a general approach for gathering phenotypic descriptions of patients from medical records in a systematic and non-cohort dependent manner. By extracting phenotype information from the free-text in such records we demonstrate that we can extend the information contained in the structured record data, and use it for producing fine-grained patient stratification and disease co-occurrence statistics. The approach uses a dictionary based on the International Classification of Disease ontology and is therefore in principle language independent. As a use case we show how records from a Danish psychiatric hospital lead to the identification of disease correlations, which subsequently can be mapped to systems biology frameworks.",
"title": ""
},
{
"docid": "106fefb169c7e95999fb411b4e07954e",
"text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.",
"title": ""
},
{
"docid": "be9cea5823779bf5ced592f108816554",
"text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines : And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Bimolecular Data : Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov",
"title": ""
},
{
"docid": "221d346a3ef1821438d388335c2d3a13",
"text": "Integrating data mining into business processes becomes crucial for business today. Modern business process management frameworks provide great support for flexible design, deployment and management of business processes. However, integrating complex data mining services into such frameworks is not trivial due to unclear definitions of user roles and missing flexible data mining services as well as missing standards and methods for the deployment of data mining solutions. This work contributes an integrated view on the definition of user roles for business, IT and data mining and discusses the integration of data mining in business processes and its evaluation in the context of BPR.",
"title": ""
},
{
"docid": "ba8886a9e251492ec0dca0512d6994be",
"text": "In this paper, we consider various moment inequalities for sums of random matrices—which are well–studied in the functional analysis and probability theory literature—and demonstrate how they can be used to obtain the best known performance guarantees for several problems in optimization. First, we show that the validity of a recent conjecture of Nemirovski is actually a direct consequence of the so–called non–commutative Khintchine’s inequality in functional analysis. Using this result, we show that an SDP–based algorithm of Nemirovski, which is developed for solving a class of quadratic optimization problems with orthogonality constraints, has a logarithmic approximation guarantee. This improves upon the polynomial approximation guarantee established earlier by Nemirovski. Furthermore, we obtain improved safe tractable approximations of a certain class of chance constrained linear matrix inequalities. Secondly, we consider a recent result of Delage and Ye on the so–called data–driven distributionally robust stochastic programming problem. One of the assumptions in the Delage–Ye result is that the underlying probability distribution has bounded support. However, using a suitable moment inequality, we show that the result in fact holds for a much larger class of probability distributions. Given the close connection between the behavior of sums of random matrices and the theoretical properties of various optimization problems, we expect that the moment inequalities discussed in this paper will find further applications in optimization.",
"title": ""
},
{
"docid": "cb18b8d464261ac4b46587e6a31efce0",
"text": "This paper critically analyses the foundations of three widely advocated information security management standards (BS7799, GASPP and SSE-CMM). The analysis reveals several fundamental problems related to these standards, casting serious doubts on their validity. The implications for research and practice, in improving information security management standards, are considered.",
"title": ""
},
{
"docid": "f9dc4cfb42a5ec893f5819e03c64d4bc",
"text": "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.,,To better capture the structure dependency of human body joints, the generator G is designed in a stacked multi-task manner to predict poses as well as occlusion heatmaps. Then, the pose and occlusion heatmaps are sent to the discriminators to predict the likelihood of the pose being real. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on two widely used human pose estimation benchmark datasets. Our approach significantly outperforms the state-of-the-art methods and almost always generates plausible human pose predictions.",
"title": ""
},
{
"docid": "6fbd64c7b38493c432bb140c544f3235",
"text": "It is well-known that people love food. However, an insane diet can cause problems in the general health of the people. Since health is strictly linked to the diet, advanced computer vision tools to recognize food images (e.g. acquired with mobile/wearable cameras), as well as their properties (e.g., calories), can help the diet monitoring by providing useful information to the experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). The food recognition is a challenging task since the food is intrinsically deformable and presents high variability in appearance. Image representation plays a fundamental role. To properly study the peculiarities of the image representation in the food application context, a benchmark dataset is needed. These facts motivate the work presented in this paper. In this work we introduce the UNICT-FD889 dataset. It is the first food image dataset composed by over 800 distinct plates of food which can be used as benchmark to design and compare representation models of food images. We exploit the UNICT-FD889 dataset for Near Duplicate Image Retrieval (NDIR) purposes by comparing three standard state-of-the-art image descriptors: Bag of Textons, PRICoLBP and SIFT. Results confirm that both textures and colors are fundamental properties in food representation. Moreover the experiments point out that the Bag of Textons representation obtained considering the color domain is more accurate than the other two approaches for NDIR.",
"title": ""
},
{
"docid": "6f68ed77668f21696051947a8ccc4f56",
"text": "Most discussions of computer security focus on control of disclosure. In Particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to Control disclosure and to provide integrity.",
"title": ""
},
{
"docid": "448dc3c1c5207e606f1bd3b386f8bbde",
"text": "Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield to a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.",
"title": ""
},
{
"docid": "3fae9d0778c9f9df1ae51ad3b5f62a05",
"text": "This paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions. Supporting such edge functions (EFs) requires solutions that can provide (i) fast and scalable EF provisioning and (ii) strong guarantees for the integrity of the EF execution and confidentiality of the state stored at the edge. In response to these goals, we (i) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms (EFPs), (ii) develop a solution to address security concerns of EFs that leverages emerging hardware support for OS agnostic trusted execution environments such as Intel SGX enclaves, and (iii) propose and evaluate AirBox, a platform for fast, scalable and secure onloading of edge functions.",
"title": ""
},
{
"docid": "89460f94140b9471b120674ddd904948",
"text": "Cross-disciplinary research on collective intelligence considers that groups, like individuals, have a certain level of intelligence. For example, the study by Woolley et al. (2010) indicates that groups which perform well on one type of task will perform well on others. In a pair of empirical studies of groups interacting face-to-face, they found evidence of a collective intelligence factor, a measure of consistent group performance across a series of tasks, which was highly predictive of performance on a subsequent, more complex task. This collective intelligence factor differed from the individual intelligence of group members, and was significantly predicted by members’ social sensitivity – the ability to understand the emotions of others based on visual facial cues (Baron-Cohen et al. 2001).",
"title": ""
},
{
"docid": "1c83671ad725908b2d4a6467b23fc83f",
"text": "Although many IT and business managers today may be lured into business intelligence (BI) investments by the promise of predictive analytics and emerging BI trends, creating an enterprise-wide BI capability is a journey that takes time. This article describes Norfolk Southern Railway’s BI journey, which began in the early 1990s with departmental reporting, evolved into data warehousing and analytic applications, and has resulted in a company that today uses BI to support corporate strategy. We describe how BI at Norfolk Southern evolved over several decades, with the company developing strong BI foundations and an effective enterprise-wide BI capability. We also identify the practices that kept the BI journey “on track.” These practices can be used by other IT and business leaders as they plan and develop BI capabilities in their own organizations.",
"title": ""
}
] |
scidocsrr
|
859de6f75fd982136341046da15cecea
|
Optimal Cluster Preserving Embedding of Nonmetric Proximity Data
|
[
{
"docid": "5d247482bb06e837bf04c04582f4bfa2",
"text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.",
"title": ""
}
] |
[
{
"docid": "1635b235c59cc57682735202c0bb2e0d",
"text": "The introduction of structural imaging of the brain by computed tomography (CT) scans and magnetic resonance imaging (MRI) has further refined classification of head injury for prognostic, diagnosis, and treatment purposes. We describe a new classification scheme to be used both as a research and a clinical tool in association with other predictors of neurologic status.",
"title": ""
},
{
"docid": "7f04ef4eb5dc53cbfa6c8b5379a95e0e",
"text": "Memory scanning is an essential component in detecting and deactivating malware while the malware is still active in memory. The content here is confined to user-mode memory scanning for malware on 32-bit and 64-bit Windows NT based systems that are memory resident and/or persistent over reboots. Malware targeting 32-bit Windows are being created and deployed at an alarming rate today. While there are not many malware targeting 64-bit Windows yet, many of the existing Win32 malware for 32-bit Windows will work fine on 64-bit Windows due to the underlying WoW64 subsystem. Here, we will present an approach to implement user-mode memory scanning for Windows. This essentially means scanning the virtual address space of all processes in memory. In case of an infection, while the malware is still active in memory, it can significantly limit detection and disinfection. The real challenge hence actually lies in fully disinfecting the machine and restoring back to its clean state. Today’s malware apply complex anti-disinfection techniques making the task of restoring the machine to a clean state extremely difficult. Here, we will discuss some of these techniques with examples from real-world malware scenarios. Practical approaches for user-mode disinfection will be presented. By leveraging the abundance of redundant information available via various Win32 and Native API from user-mode, certain techniques to detect hidden processes will also be presented. Certain challenges in porting the memory scanner to 64-bit Windows and Vista will be discussed. The advantages and disadvantages of implementing a memory scanner in user-mode (rather than kernel-mode) will also be discussed.",
"title": ""
},
{
"docid": "8d3e7a6032d6e017537b68b47c4dae38",
"text": "With the increasing complexity of modern radar system and the increasing number of devices used in the radar system, it would be highly desirable to model the complete radar system including hardware and software by a single tool. This paper presents a novel software-based simulation method for modern radar system which here is automotive radar application. Various functions of automotive radar, like target speed, distance and azimuth and elevation angle detection, are simulated in test case and the simulation results are compared with the measurement results.",
"title": ""
},
{
"docid": "054fcf065915118bbfa3f12759cb6912",
"text": "Automatization of the diagnosis of any kind of disease is of great importance and its gaining speed as more and more deep learning solutions are applied to different problems. One of such computer-aided systems could be a decision support tool able to accurately differentiate between different types of breast cancer histological images – normal tissue or carcinoma (benign, in situ or invasive). In this paper authors present a deep learning solution, based on convolutional capsule network, for classification of four types of images of breast tissue biopsy when hematoxylin and eosin staining is applied. The crossvalidation accuracy, averaged over four classes, was achieved to be 87 % with equally high sensitivity.",
"title": ""
},
{
"docid": "ed4178ec9be6f4f8e87a50f0bf1b9a41",
"text": "PURPOSE\nTo report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.\n\n\nMETHODS\nA 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.\n\n\nRESULTS\nThe possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.\n\n\nCONCLUSION\nCRAO as presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but we should be aware of WG in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis; it requires a high index of suspicion, and we should obtain an accurate medical history and repeat serological and histopathological examinations. It emphasizes that inflammation of arteries leads to irreversible retinal infarction, and visual loss may occur.",
"title": ""
},
{
"docid": "6f9186944cdeab30da7a530a942a5b3d",
"text": "In this work, we perform a comparative analysis of the impact of substrate technologies on the performance of 28 GHz antennas for 5G applications. For this purpose, we model, simulate, analyze and compare 2×2 patch antenna arrays on five substrate technologies typically used for manufacturing integrated antennas. The impact of these substrates on the impedance bandwidth, efficiency and gain of the antennas is quantified. Finally, the antennas are fabricated and measured. Excellent correlation is obtained between measurement and simulation results.",
"title": ""
},
{
"docid": "09a6f724e5b2150a39f89ee1132a33e9",
"text": "This paper concerns a deep learning approach to relevance ranking in information retrieval (IR). Existing deep IR models such as DSSM and CDSSM directly apply neural networks to generate ranking scores, without explicit understandings of the relevance. According to the human judgement process, a relevance label is generated by the following three steps: 1) relevant locations are detected; 2) local relevances are determined; 3) local relevances are aggregated to output the relevance label. In this paper we propose a new deep learning architecture, namely DeepRank, to simulate the above human judgment process. Firstly, a detection strategy is designed to extract the relevant contexts. Then, a measure network is applied to determine the local relevances by utilizing a convolutional neural network (CNN) or two-dimensional gated recurrent units (2D-GRU). Finally, an aggregation network with sequential integration and term gating mechanism is used to produce a global relevance score. DeepRank well captures important IR characteristics, including exact/semantic matching signals, proximity heuristics, query term importance, and diverse relevance requirement. Experiments on both benchmark LETOR dataset and a large scale clickthrough data show that DeepRank can significantly outperform learning to ranking methods, and existing deep learning methods.",
"title": ""
},
{
"docid": "b531674f21e88ac82071583531e639c6",
"text": "OBJECTIVE\nTo evaluate use of, satisfaction with, and social adjustment with adaptive devices compared with prostheses in young people with upper limb reduction deficiencies.\n\n\nMETHODS\nCross-sectional study of 218 young people with upper limb reduction deficiencies (age range 2-20 years) and their parents. A questionnaire was used to evaluate participants' characteristics, difficulties encountered, and preferred solutions for activities, use satisfaction, and social adjustment with adaptive devices vs prostheses. The Quebec User Evaluation of Satisfaction with assistive Technology and a subscale of Trinity Amputation and Prosthesis Experience Scales were used.\n\n\nRESULTS\nOf 218 participants, 58% were boys, 87% had transversal upper limb reduction deficiencies, 76% with past/present use of adaptive devices and 37% with past/present use of prostheses. Young people (> 50%) had difficulties in performing activities. Of 360 adaptive devices, 43% were used for self-care (using cutlery), 28% for mobility (riding a bicycle) and 5% for leisure activities. Prostheses were used for self-care (4%), mobility (9%), communication (3%), recreation and leisure (6%) and work/employment (4%). The preferred solution for difficult activities was using unaffected and affected arms/hands and other body parts (> 60%), adaptive devices (< 48%) and prostheses (< 9%). Satisfaction and social adjustment with adaptive devices were greater than with prostheses (p < 0.05).\n\n\nCONCLUSION\nYoung people with upper limb reduction deficiencies are satisfied and socially well-adjusted with adaptive devices. Adaptive devices are good alternatives to prostheses.",
"title": ""
},
{
"docid": "5bee78694f3428d3882e27000921f501",
"text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.",
"title": ""
},
{
"docid": "5c7080162c4df9fdd7d5f385c4005bd3",
"text": "The placebo effect is very well known, being replicated in many scientific studies. At the same time, its exact mechanisms still remain unknown. Quite a few hypothetical explanations for the placebo effect have been suggested, including faith, belief, hope, classical conditioning, conscious/subconscious expectation, endorphins, and the meaning response. This article argues that all these explanations may boil down to autosuggestion, in the sense of \"communication with the subconscious.\" An important implication of this is that the placebo effect can in principle be used effectively without the placebo itself, through a direct use of autosuggestion. The benefits of such a strategy are clear: fewer side effects from medications, huge cost savings, no deception of patients, relief of burden on the physician's time, and healing in domains where medication or other therapies are problematic.",
"title": ""
},
{
"docid": "4d4a09c7cef74e9be52844a61ca57bef",
"text": "The key of zero-shot learning (ZSL) is how to find the information transfer model for bridging the gap between images and semantic information (texts or attributes). Existing ZSL methods usually construct the compatibility function between images and class labels with the consideration of the relevance on the semantic classes (the manifold structure of semantic classes). However, the relationship of image classes (the manifold structure of image classes) is also very important for the compatibility model construction. It is difficult to capture the relationship among image classes due to unseen classes, so that the manifold structure of image classes often is ignored in ZSL. To complement each other between the manifold structure of image classes and that of semantic classes information, we propose structure propagation (SP) for improving the performance of ZSL for classification. SP can jointly consider the manifold structure of image classes and that of semantic classes for approximating to the intrinsic structure of object classes. Moreover, the SP can describe the constrain condition between the compatibility function and these manifold structures for balancing the influence of the structure propagation iteration. The SP solution provides not only unseen class labels but also the relationship of two manifold structures that encode the positive transfer in structure propagation. Experimental results demonstrate that SP can attain the promising results on the AwA, CUB, Dogs and SUN databases.",
"title": ""
},
{
"docid": "2a2db7ff8bb353143ca2bb9ad8ec2d7d",
"text": "A revision of the genus Leptoplana Ehrenberg, 1831 in the Mediterranean basin is undertaken. This revision deals with the distribution and validity of the species of Leptoplana known for the area. The Mediterranean sub-species polyclad, Leptoplana tremellaris forma mediterranea Bock, 1913 is elevated to the specific level. Leptoplana mediterranea comb. nov. is redescribed from the Lake of Tunis, Tunisia. This flatworm is distinguished from Leptoplana tremellaris mainly by having a prostatic vesicle provided with a long diverticulum attached ventrally to the seminal vesicle, a genital pit closer to the male pore than to the female one and a twelve-eyed hatching juvenile instead of the four-eyed juvenile of L. tremellaris. The direct development in L. mediterranea is described at 15 °C.",
"title": ""
},
{
"docid": "259972cd20a1f763b07bef4619dc7f70",
"text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.",
"title": ""
},
{
"docid": "eb99d3fb9f6775453ac25861cb05f04c",
"text": "Hate content in social media is ever increasing. While Facebook, Twitter, Google have attempted to take several steps to tackle this hate content, they most often risk the violation of freedom of speech. Counterspeech, on the other hand, provides an effective way of tackling the online hate without the loss of freedom of speech. Thus, an alternative strategy for these platforms could be to promote counterspeech as a defense against hate content. However, in order to have a successful promotion of such counterspeech, one has to have a deep understanding of its dynamics in the online world. Lack of carefully curated data largely inhibits such understanding. In this paper, we create and release the first ever dataset for counterspeech using comments from YouTube. The data contains 9438 manually annotated comments where the labels indicate whether a comment is a counterspeech or not. This data allows us to perform a rigorous measurement study characterizing the linguistic structure of counterspeech for the first time. This analysis results in various interesting insights such as: the counterspeech comments receive double the likes received by the non-counterspeech comments, for certain communities majority of the non-counterspeech comments tend to be hate speech, the different types of counterspeech are not all equally effective and the language choice of users posting counterspeech is largely different from those posting noncounterspeech as revealed by a detailed psycholinguistic analysis. Finally, we build a set of machine learning models that are able to automatically detect counterspeech in YouTube videos with an F1-score of 0.73.",
"title": ""
},
{
"docid": "59e0bdccc5d983350ef7a53cfd953c07",
"text": "1,2 Computer Studies Department , Faculty of Science, The Polytechnic, Ibadan Oyo State, Nigeria. ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Patient identification is the foundation of effective healthcare: the correct care needs to be delivered to the correct patient. However, relying on manual identification processes such as demographic searches and social security numbers often results in patient misidentification hence, the needs for electronic medical records (EMR). . It was discovered that many medical systems switching to electronic health records in order to explore the advantages of electronic medical records (EMR) creates new problems by producing more targets for medical data to be hacked. Hackers are believed to have gained access to up to 80 million records that contained Social Security numbers, birthdays, postal addresses, and e-mail addresses.",
"title": ""
},
{
"docid": "1ab0974dc10f84c6e1fc80ac3f251ac3",
"text": "The optimisation of a printed circuit board assembly line is mainly influenced by the constraints of the surface mount device placement (SMD) machine and the characteristics of the production environment. Hence, this paper surveys the various machine technologies and characteristics and proposes five categories of machines based on their specifications and operational methods. These are dual-delivery, multi-station, turret-type, multi-head and sequential pick-and-place SMD placement machines. We attempt to associate the assembly machine technologies with heuristic methods; and address the scheduling issues of each category of machine. This grouping aims to guide future researchers in this field to have a better understanding of the various SMD placement machine specifications and operational methods, so that they can subsequently use them to apply, or even design heuristics, which are more appropriate to the machine characteristics and the operational methods. We also discuss our experiences in solving the pick-and-place sequencing problem of the theoretical and real machine problem, and highlight some of the important practical issues that should be considered in solving real SMD placement machine problems.",
"title": ""
},
{
"docid": "57c090eaab37e615b564ef8451412962",
"text": "Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (opvi), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling—allowing inference to scale to massive data—as well as objectives that admit variational programs—a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of opvi on a mixture model and a generative model of images.",
"title": ""
},
{
"docid": "a27a05cb00d350f9021b5c4f609d772c",
"text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.",
"title": ""
},
{
"docid": "73c3b82e723b5e76a6e9c3a556888c48",
"text": "In this paper we present the first large-scale scene attribute database. First, we perform crowd-sourced human studies to find a taxonomy of 102 discriminative attributes. Next, we build the “SUN attribute database” on top of the diverse SUN categorical database. Our attribute database spans more than 700 categories and 14,000 images and has potential for use in high-level scene understanding and fine-grained scene recognition. We use our dataset to train attribute classifiers and evaluate how well these relatively simple classifiers can recognize a variety of attributes related to materials, surface properties, lighting, functions and affordances, and spatial envelope properties.",
"title": ""
},
{
"docid": "5dc4dfc2d443c31332c70a56c2d70c7d",
"text": "Sentiment analysis or opinion mining is an important type of text analysis that aims to support decision making by extracting and analyzing opinion oriented text, identifying positive and negative opinions, and measuring how positively or negatively an entity (i.e., people, organization, event, location, product, topic, etc.) is regarded. As more and more users express their political and religious views on Twitter, tweets become valuable sources of people's opinions. Tweets data can be efficiently used to infer people's opinions for marketing or social studies. This paper proposes a Tweets Sentiment Analysis Model (TSAM) that can spot the societal interest and general people's opinions in regard to a social event. In this paper, Australian federal election 2010 event was taken as an example for sentiment analysis experiments. We are primarily interested in the sentiment of the specific political candidates, i.e., two primary minister candidates - Julia Gillard and Tony Abbot. Our experimental results demonstrate the effectiveness of the system.",
"title": ""
}
] |
scidocsrr
|
e575c94fe31fa474cd80a6b4518101bc
|
Multi-DOF counterbalance mechanism for low-cost, safe and easy-usable robot arm
|
[
{
"docid": "dbc09474868212acf3b29e49a6facbce",
"text": "In this paper, we propose a sophisticated design of human symbiotic robots that provide physical supports to the elderly such as attendant care with high-power and kitchen supports with dexterity while securing contact safety even if physical contact occurs with them. First of all, we made clear functional requirements for such a new generation robot, amounting to fifteen items to consolidate five significant functions such as “safety”, “friendliness”, “dexterity”, “high-power” and “mobility”. In addition, we set task scenes in daily life where support by robot is useful for old women living alone, in order to deduce specifications for the robot. Based on them, we successfully developed a new generation of human symbiotic robot, TWENDY-ONE that has a head, trunk, dual arms with a compact passive mechanism, anthropomorphic dual hands with mechanical softness in joints and skins and an omni-wheeled vehicle. Evaluation experiments focusing on attendant care and kitchen supports using TWENDY-ONE indicate that this new robot will be extremely useful to enhance quality of life for the elderly in the near future where human and robot co-exist.",
"title": ""
}
] |
[
{
"docid": "95cd9d6572700e2b118c7cb0ffba549a",
"text": "Non-volatile main memory (NVRAM) has the potential to fundamentally change the persistency of software. Applications can make their state persistent by directly placing data structures on NVRAM instead of volatile DRAM. However, the persistent nature of NVRAM requires significant changes for memory allocators that are now faced with the additional tasks of data recovery and failure-atomicity. In this paper, we present nvm malloc, a general-purpose memory allocator concept for the NVRAM era as a basic building block for persistent applications. We introduce concepts for managing named allocations for simplified recovery and using volatile and non-volatile memory in combination to provide both high performance and failure-atomic allocations.",
"title": ""
},
{
"docid": "278fd51fd028f1a4211e5f618ca3cc99",
"text": "Decades ago, discussion of an impending global pandemic of obesity was thought of as heresy. But in the 1970s, diets began to shift towards increased reliance upon processed foods, increased away-from-home food intake, and increased use of edible oils and sugar-sweetened beverages. Reductions in physical activity and increases in sedentary behavior began to be seen as well. The negative effects of these changes began to be recognized in the early 1990s, primarily in low- and middle-income populations, but they did not become clearly acknowledged until diabetes, hypertension, and obesity began to dominate the globe. Now, rapid increases in the rates of obesity and overweight are widely documented, from urban and rural areas in the poorest countries of sub-Saharan Africa and South Asia to populations in countries with higher income levels. Concurrent rapid shifts in diet and activity are well documented as well. An array of large-scale programmatic and policy measures are being explored in a few countries; however, few countries are engaged in serious efforts to prevent the serious dietary challenges being faced.",
"title": ""
},
{
"docid": "ff1a745ee6f9f618f44a970a8d210236",
"text": "A stepwise-voltage-generation circuit was devised that is based on a capacitor bank and that dissipates no energy when a stepwise voltage is generated. The stepwise voltage is generated spontaneously, and depends neither on the initial voltages to the capacitors nor on the switching order. A new adiabatic-charging circuit based on this circuit was also devised that increases the voltage in a stepwise fashion. The total capacitance of the capacitors in the regenerator is much smaller than a load capacitance, which enables construction of a very small adiabatic regenerator. This regenerator cannot be made with a conventional circuit, which uses a tank capacitor that is much larger than a load capacitor for adiabatic charging.",
"title": ""
},
{
"docid": "919ce1951d219970a05086a531b9d796",
"text": "Anti-neutrophil cytoplasmic autoantibodies (ANCA) and anti-glomerular basement membrane (GBM) necrotizing and crescentic glomerulonephritis are aggressive and destructive glomerular diseases that are associated with and probably caused by circulating ANCA and anti-GBM antibodies. These necrotizing lesions are manifested by acute nephritis and deteriorating kidney function often accompanied by distinctive clinical features of systemic disease. Prompt diagnosis requires clinical acumen that allows for the prompt institution of therapy aimed at removing circulating autoantibodies and quelling the inflammatory process. Continuing exploration of the etiology and pathogenesis of these aggressive inflammatory diseases have gradually uncovered new paradigms for the cause of and more specific therapy for these particular glomerular disorders and for autoimmune glomerular diseases in general.",
"title": ""
},
{
"docid": "e1e878c5df90a96811f885935ac13888",
"text": "Multiple-input-multiple-output (MIMO) wireless systems use multiple antenna elements at transmit and receive to offer improved capacity over single antenna topologies in multipath channels. In such systems, the antenna properties as well as the multipath channel characteristics play a key role in determining communication performance. This paper reviews recent research findings concerning antennas and propagation in MIMO systems. Issues considered include channel capacity computation, channel measurement and modeling approaches, and the impact of antenna element properties and array configuration on system performance. Throughout the discussion, outstanding research questions in these areas are highlighted.",
"title": ""
},
{
"docid": "02a73147f948a4441529100f0a8acfdc",
"text": "Organizations in virtually every industry are facing unprecedented pressures from many external forces. In an environment characterized by more regulatory mandates, more customer demands for better products and services, and an accelerated pace of technological change, some executive teams are turning to enterprise architecture (EA) to help their organizations better leverage their IT investments. The results of our study show there is a positive relationship between the stage of EA maturity and three areas of IT value: (1) ability to manage external relationships, (2) ability to lower operational costs, and (3) strategic agility. We also found positive relationships between EA maturity and improved business-IT alignment and risk management. Although these findings are based on responses from 140 CIOs working in a single industry that has been slower than others to leverage IT (U.S. hospitals), we believe they provide useful guidelines to help organizations in all industries increase the value from their IT investments.",
"title": ""
},
{
"docid": "789a024e39a832071ffee9e368b7a191",
"text": "In this paper, we propose a new deep learning approach, called neural association model (NAM), for probabilistic reasoning in artificial intelligence. We propose to use neural networks to model association between any two events in a domain. Neural networks take one event as input and compute a conditional probability of the other event to model how likely these two events are associated. The actual meaning of the conditional probabilities varies between applications and depends on how the models are trained. In this work, as two case studies, we have investigated two NAM structures, namely deep neural networks (DNN) and relationmodulated neural nets (RMNN), on several probabilistic reasoning tasks in AI, including recognizing textual entailment, triple classification in multirelational knowledge bases and common-sense reasoning. Experimental results on several popular data sets derived from WordNet, FreeBase and ConceptNet have all demonstrated that both DNN and RMNN perform equally well and they can significantly outperform the conventional methods available for these reasoning tasks. Moreover, compared with DNN, RMNN are superior in knowledge transfer, where a pre-trained model can be quickly extended to an unseen relation after observing only a few training samples.",
"title": ""
},
{
"docid": "c346820b43f99aa6714900c5b110db13",
"text": "BACKGROUND\nDiabetes Mellitus (DM) is a chronic disease that is considered a global public health problem. Education and self-monitoring by diabetic patients help to optimize and make possible a satisfactory metabolic control enabling improved management and reduced morbidity and mortality. The global growth in the use of mobile phones makes them a powerful platform to help provide tailored health, delivered conveniently to patients through health apps.\n\n\nOBJECTIVE\nThe aim of our study was to evaluate the efficacy of mobile apps through a systematic review and meta-analysis to assist DM patients in treatment.\n\n\nMETHODS\nWe conducted searches in the electronic databases MEDLINE (Pubmed), Cochrane Register of Controlled Trials (CENTRAL), and LILACS (Latin American and Caribbean Health Sciences Literature), including manual search in references of publications that included systematic reviews, specialized journals, and gray literature. We considered eligible randomized controlled trials (RCTs) conducted after 2008 with participants of all ages, patients with DM, and users of apps to help manage the disease. The meta-analysis of glycated hemoglobin (HbA1c) was performed in Review Manager software version 5.3.\n\n\nRESULTS\nThe literature search identified 1236 publications. Of these, 13 studies were included that evaluated 1263 patients. In 6 RCTs, there were a statistical significant reduction (P<.05) of HbA1c at the end of studies in the intervention group. The HbA1c data were evaluated by meta-analysis with the following results (mean difference, MD -0.44; CI: -0.59 to -0.29; P<.001; I²=32%).The evaluation favored the treatment in patients who used apps without significant heterogeneity.\n\n\nCONCLUSIONS\nThe use of apps by diabetic patients could help improve the control of HbA1c. In addition, the apps seem to strengthen the perception of self-care by contributing better information and health education to patients. Patients also become more self-confident to deal with their diabetes, mainly by reducing their fear of not knowing how to deal with potential hypoglycemic episodes that may occur.",
"title": ""
},
{
"docid": "4d11eca5601f5128801a8159a154593a",
"text": "Polymorphic malware belong to the class of host based threats which defy signature based detection mechanisms. Threat actors use various code obfuscation methods to hide the code details of the polymorphic malware and each dynamic iteration of the malware bears different and new signatures therefore makes its detection harder by signature based antimalware programs. Sandbox based detection systems perform syntactic analysis of the binary files to find known patterns from the un-encrypted segment of the malware file. Anomaly based detection systems can detect polymorphic threats but generate enormous false alarms. In this work, authors present a novel cognitive framework using semantic features to detect the presence of polymorphic malware inside a Microsoft Windows host using a process tree based temporal directed graph. Fractal analysis is performed to find cognitively distinguishable patterns of the malicious processes containing polymorphic malware executables. The main contributions of this paper are; the presentation of a graph theoretic approach for semantic characterization of polymorphism in the operating system's process tree, and the cognitive feature extraction of the polymorphic behavior for detection over a temporal process space.",
"title": ""
},
{
"docid": "19f74e217a42f306dff22a08b46e0ede",
"text": "This paper describes a pattern-based approach to building packet classifiers. One novelty of the approach is that it can be implemented efficiently in both software and hardware. A performance study shows that the software implementation is about twice as fast as existing mechanisms, and that the hardware implementation is currently able to keep up with OC-12 (622Mbps) network links and is likely to operate at gigabit speeds in the near future.",
"title": ""
},
{
"docid": "74de053230e7b96ee4e1aee844813723",
"text": "OBJECTIVE\nTo investigate the immediate effects of Kinesio Taping® (KT) on sit-to-stand (STS) movement, balance and dynamic postural control in children with cerebral palsy (CP).\n\n\nMETHODS\nFour children diagnosed with left hemiplegic CP level I by the Gross Motor Function Classification System were evaluated under conditions without taping as control condition (CC); and with KT as kinesio condition. A motion analysis system was used to measure total duration of STS movement and angular movements of each joint. Clinical instruments such as Pediatric Balance Scale (PBS) and Timed up and Go (TUG) were also applied.\n\n\nRESULTS\nCompared to CC, decreased total duration of STS, lower peak ankle flexion, higher knee extension at the end of STS, and decreased total time in TUG; but no differences were obtained on PBS score in KT.\n\n\nCONCLUSION\nNeuromuscular taping seems to be beneficial on dynamic activities, but not have the same performance in predominantly static activities studied.",
"title": ""
},
{
"docid": "118db394bb1000f64154573b2b77b188",
"text": "Question answering requires access to a knowledge base to check facts and reason about information. Knowledge in the form of natural language text is easy to acquire, but difficult for automated reasoning. Highly-structured knowledge bases can facilitate reasoning, but are difficult to acquire. In this paper we explore tables as a semi-structured formalism that provides a balanced compromise to this tradeoff. We first use the structure of tables to guide the construction of a dataset of over 9000 multiple-choice questions with rich alignment annotations, easily and efficiently via crowd-sourcing. We then use this annotated data to train a semistructured feature-driven model for question answering that uses tables as a knowledge base. In benchmark evaluations, we significantly outperform both a strong unstructured retrieval baseline and a highlystructured Markov Logic Network model.",
"title": ""
},
{
"docid": "062ef386998d3c47e1f3845dec55499c",
"text": "The purpose of this study was to examine the effectiveness of the Brain Breaks® Physical Activity Solutions in changing attitudes toward physical activity of school children in a community in Poland. In 2015, a sample of 326 pupils aged 9-11 years old from 19 classes at three selected primary schools were randomly assigned to control and experimental groups within the study. During the classes, children in the experimental group performed physical activities two times per day in three to five minutes using Brain Breaks® videos for four months, while the control group did not use the videos during the test period. Students' attitudes toward physical activities were assessed before and after the intervention using the \"Attitudes toward Physical Activity Scale\". Repeated measures of ANOVA were used to examine the change from pre- to post-intervention. Overall, a repeated measures ANOVA indicated time-by-group interaction effects in 'Self-efficacy on learning with video exercises', F(1.32) = 75.28, p = 0.00, η2 = 0.19. Although the changes are minor, there were benefits of the intervention. It may be concluded that HOPSports Brain Breaks® Physical Activity Program contributes to better self-efficacy on learning while using video exercise of primary school children.",
"title": ""
},
{
"docid": "7ec225f2fd4993feddcf996b576d140f",
"text": "Conventional network representation learning (NRL) models learn low-dimensional vertex representations by simply regarding each edge as a binary or continuous value. However, there exists rich semantic information on edges and the interactions between vertices usually preserve distinct meanings, which are largely neglected by most existing NRL models. In this work, we present a novel Translation-based NRL model, TransNet, by regarding the interactions between vertices as a translation operation. Moreover, we formalize the task of Social Relation Extraction (SRE) to evaluate the capability of NRL methods on modeling the relations between vertices. Experimental results on SRE demonstrate that TransNet significantly outperforms other baseline methods by 10% to 20% on hits@1. The source code and datasets can be obtained from https: //github.com/thunlp/TransNet.",
"title": ""
},
{
"docid": "ec905fd77dee3b5fbf24b7e73905bfb8",
"text": "The effects of exposure to violent video games on automatic associations with the self were investigated in a sample of 121 students. Playing the violent video game Doom led participants to associate themselves with aggressive traits and actions on the Implicit Association Test. In addition, self-reported prior exposure to violent video games predicted automatic aggressive self-concept, above and beyond self-reported aggression. Results suggest that playing violent video games can lead to the automatic learning of aggressive self-views.",
"title": ""
},
{
"docid": "fc74dadf88736675c860109a95fcdda1",
"text": "This paper presents the preliminary work done towards the development of a Gender Recognition System that can be incorporated into the Hindi Automatic Speech Recognition (ASR) System. Gender Recognition (GR) can help in the development of speaker-independent speech recognition systems. This paper presents a general approach to identifying feature vectors that effectively distinguish gender of a speaker from Hindi phoneme utterances. 10 vowels and 5 nasals of the Hindi language were studied for their effectiveness in identifying gender of the speaker. All the 10 vowel Phonemes performed well, while b] bZ] Å] ,] ,s] vks and vkS showed excellent gender distinction performance. All five nasals 3] ́] .k] u and e which were tested, showed a recognition accuracy of almost 100%. The Mel Frequency Cepstral Coefficients (MFCC) are widely used in ASR. The choice of MFCC as features in Gender Recognition will avoid additional computation. The effect of the MFCC feature vector dimension on the GR accuracy was studied and the findings presented. General Terms Automatic speech recognition in Hindi",
"title": ""
},
{
"docid": "9b53d96025c26254b38a4325c9d2da15",
"text": "The parameter spaces of hierarchical systems such as multilayer perceptrons include singularities due to the symmetry and degeneration of hidden units. A parameter space forms a geometrical manifold, called the neuromanifold in the case of neural networks. Such a model is identified with a statistical model, and a Riemannian metric is given by the Fisher information matrix. However, the matrix degenerates at singularities. Such a singular structure is ubiquitous not only in multilayer perceptrons but also in the gaussian mixture probability densities, ARMA time-series model, and many other cases. The standard statistical paradigm of the Cramr-Rao theorem does not hold, and the singularity gives rise to strange behaviors in parameter estimation, hypothesis testing, Bayesian inference, model selection, and in particular, the dynamics of learning from examples. Prevailing theories so far have not paid much attention to the problem caused by singularity, relying only on ordinary statistical theories developed for regular (nonsingular) models. Only recently have researchers remarked on the effects of singularity, and theories are now being developed. This article gives an overview of the phenomena caused by the singularities of statistical manifolds related to multilayer perceptrons and gaussian mixtures. We demonstrate our recent results on these problems. Simple toy models are also used to show explicit solutions. We explain that the maximum likelihood estimator is no longer subject to the gaussian distribution even asymptotically, because the Fisher information matrix degenerates, that the model selection criteria such as AIC, BIC, and MDL fail to hold in these models, that a smooth Bayesian prior becomes singular in such models, and that the trajectories of dynamics of learning are strongly affected by the singularity, causing plateaus or slow manifolds in the parameter space. The natural gradient method is shown to perform well because it takes the singular geometrical structure into account. The generalization error and the training error are studied in some examples.",
"title": ""
},
{
"docid": "3065ea79429f69f01d86f393ca451491",
"text": "Transition is a concept of interest to nurse researchers, clinicians, and theorists. This article builds on earlier theoretical work on transitions by providing evidence from the nursing literature. A review and synthesis of the nursing literature (1986-1992) supports the claim of the centrality of transitions in nursing. Universal properties of transitions are process, direction, and change in fundamental life patterns. At the individual and family levels, changes occurring in identities, roles, relationships, abilities, and patterns of behavior constitute transitions. At the organizational level, transitional change is that occurring in structure, function, or dynamics. Conditions that may influence the quality of the transition experience and the consequences of transitions are meanings, expectations, level of knowledge and skill, environment, level of planning, and emotional and physical well-being. Indicators of successful transitions are subjective well-being, role mastery, and the well-being of relationships. Three types of nursing therapeutics are discussed. A framework for further work is described.",
"title": ""
},
{
"docid": "b4fca94e4c13cecfce5aabee910d5b02",
"text": "We present a narrow-size multiband inverted-F antenna (IFA), which can easily fit inside the housing of display units of ultra-slim laptops. The narrowness of the antenna is achieved by allowing some of its metallic parts to extend over the sidewalls of the dielectric substrate. The antenna is aimed to operate in all the allocated WiFi and WiMAX frequency bands while providing near-omnidirectional coverage in the horizontal plane. The multiband performance of the proposed antenna and its omnidirectionality are validated by measurements.",
"title": ""
},
{
"docid": "17fb585ff12cff879febb32c2a16b739",
"text": "An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals while essential for effective operation of BCI systems is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature presentations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and the competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.",
"title": ""
}
] |
scidocsrr
|
8edf649b3885eb4378612ba7d8186701
|
Deep Convolutional Compressed Sensing for LiDAR Depth Completion
|
[
{
"docid": "7af26168ae1557d8633a062313d74b78",
"text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"title": ""
},
{
"docid": "b3a85b88e4a557fcb7f0efb6ba628418",
"text": "We present the bilateral solver, a novel algorithm for edgeaware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth superresolution, colorization, and semantic segmentation) while being 10-1000× faster than baseline techniques with comparable accuracy, and producing lower-error output than techniques with comparable runtimes. The bilateral solver is fast, robust, straightforward to generalize to new domains, and simple to integrate into deep learning pipelines.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
[
{
"docid": "c664918193470b20af2ce2ecf0c8e1c7",
"text": "The exceptional electronic properties of graphene, with its charge carriers mimicking relativistic quantum particles and its formidable potential in various applications, have ensured a rapid growth of interest in this new material. We report on electron transport in quantum dot devices carved entirely from graphene. At large sizes (>100 nanometers), they behave as conventional single-electron transistors, exhibiting periodic Coulomb blockade peaks. For quantum dots smaller than 100 nanometers, the peaks become strongly nonperiodic, indicating a major contribution of quantum confinement. Random peak spacing and its statistics are well described by the theory of chaotic neutrino billiards. Short constrictions of only a few nanometers in width remain conductive and reveal a confinement gap of up to 0.5 electron volt, demonstrating the possibility of molecular-scale electronics based on graphene.",
"title": ""
},
{
"docid": "c20549d78c2b5d393a59fa83718e1004",
"text": "This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deburring problem. To achieve this task, we combine an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient projections-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.",
"title": ""
},
{
"docid": "c95894477d7279deb7ddbb365030c34e",
"text": "Among mammals living in social groups, individuals form communication networks where they signal their identity and social status, facilitating social interaction. In spite of its importance for understanding of mammalian societies, the coding of individual-related information in the vocal signals of non-primate mammals has been relatively neglected. The present study focuses on the spotted hyena Crocuta crocuta, a social carnivore known for its complex female-dominated society. We investigate if and how the well-known hyena's laugh, also known as the giggle call, encodes information about the emitter. By analyzing acoustic structure in both temporal and frequency domains, we show that the hyena's laugh can encode information about age, individual identity and dominant/subordinate status, providing cues to receivers that could enable assessment of the social position of an emitting individual. The range of messages encoded in the hyena's laugh is likely to play a role during social interactions. This call, together with other vocalizations and other sensory channels, should ensure an array of communication signals that support the complex social system of the spotted hyena. Experimental studies are now needed to decipher precisely the communication network of this species.",
"title": ""
},
{
"docid": "ca3c3dec83821747896d44261ba2f9ad",
"text": "Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D geometry representations are boundary based, occupied regions do not increase proportionately with the size of the discretization, resulting in wasted computation. In this work, we represent 3D spaces as volumetric fields, and propose a novel design that employs field probing filters to efficiently extract features from them. Each field probing filter is a set of probing points — sensors that perceive the space. Our learning algorithm optimizes not only the weights associated with the probing points, but also their locations, which deforms the shape of the probing filters and adaptively distributes them in 3D space. The optimized probing points sense the 3D space “intelligently”, rather than operating blindly over the entire domain. We show that field probing is significantly more efficient than 3DCNNs, while providing state-of-the-art performance, on classification tasks for 3D object recognition benchmark datasets.",
"title": ""
},
{
"docid": "31122e142e02b7e3b99c52c8f257a92e",
"text": "Impervious surface has been recognized as a key indicator in assessing urban environments. However, accurate impervious surface extraction is still a challenge. Effectiveness of impervious surface in urban land-use classification has not been well addressed. This paper explored extraction of impervious surface information from Landsat Enhanced Thematic Mapper data based on the integration of fraction images from linear spectral mixture analysis and land surface temperature. A new approach for urban land-use classification, based on the combined use of impervious surface and population density, was developed. Five urban land-use classes (i.e., low-, medium-, high-, and very-high-intensity residential areas, and commercial/industrial/transportation uses) were developed in the city of Indianapolis, Indiana, USA. Results showed that the integration of fraction images and surface temperature provided substantially improved impervious surface image. Accuracy assessment indicated that the rootmean-square error and system error yielded 9.22% and 5.68%, respectively, for the impervious surface image. The overall classification accuracy of 83.78% for five urban land-use classes was obtained. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "32ac9999de3228809233c85ca9b8ecb8",
"text": "With rapid development of Internet of things (IoT), there exists an ever-growing demand for ubiquitous connectivity to integrate multiple heterogeneous networks, such as Zigbee Ad hoc network, wireless LAN, cable network, etc. Some typical applications including environmental monitoring, intelligent transportation, medical care, smart home, industry control, safety defense, etc. require a smart gateway to provide high data-rate, end-to-end connectivity utilizing the higher bandwidth of multi-hop networks among those heterogeneous networks. This paper proposes a novel configurable smart IoT gateway which has three important benefits. Firstly, the gateway has plug gable architecture, whose modules with different communication protocols can be customized and plugged in according to different networks. Secondly, it has unified external interfaces which are fit for flexible software development. Finally, it has flexible protocol to translate different sensor data into a uniform format. Compared with similar research, the gateway has better scalability, flexibility and lower cost.",
"title": ""
},
{
"docid": "fe903498e0c3345d7e5ebc8bf3407c2f",
"text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.",
"title": ""
},
{
"docid": "d0985c38f3441ca0d69af8afaf67c998",
"text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.",
"title": ""
},
{
"docid": "207e90cebdf23fb37f10b5ed690cb4fc",
"text": "In the scientific digital libraries, some papers from different research communities can be described by community-dependent keywords even if they share a semantically similar topic. Articles that are not tagged with enough keyword variations are poorly indexed in any information retrieval system which limits potentially fruitful exchanges between scientific disciplines. In this paper, we introduce a novel experimentally designed pipeline for multi-label semantic-based tagging developed for open-access metadata digital libraries. The approach starts by learning from a standard scientific categorization and a sample of topic tagged articles to find semantically relevant articles and enrich its metadata accordingly. Our proposed pipeline aims to enable researchers reaching articles from various disciplines that tend to use different terminologies. It allows retrieving semantically relevant articles given a limited known variation of search terms. In addition to achieving an accuracy that is higher than an expanded query based method using a topic synonym set extracted from a semantic network, our experiments also show a higher computational scalability versus other comparable techniques. We created a new benchmark extracted from the open-access metadata of a scientific digital library and published it along with the experiment code to allow further research in the topic.",
"title": ""
},
{
"docid": "78f2e1fc79a9c774e92452631d6bce7a",
"text": "Adders are basic integral part of arithmetic circuits. The adders have been realized with two styles: fixed stage size and variable stage size. In this paper, fixed stage and variable stage carry skip adder configurations have been analyzed and then a new 16-bit high speed variable stage carry skip adder is proposed by modifying the existing structure. The proposed adder has seven stages where first and last stage are of 1 bit each, it keeps increasing steadily till the middle stage which is the bulkiest and hence is the nucleus stage. The delay and power consumption in the proposed adder is reduced by 61.75% and 8% respectively. The proposed adder is implemented and simulated using 90 nm CMOS technology in Cadence Virtuoso. It is pertinent to mention that the delay improvement in the proposed adder has been achieved without increase in any power consumption and circuit complexity. The adder proposed in this work is suitable for high speed and low power VLSI based arithmetic circuits.",
"title": ""
},
{
"docid": "f3b1e1c9effb7828a62187e9eec5fba7",
"text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.",
"title": ""
},
{
"docid": "8fd928d49bdcc0571db99812da3fbe41",
"text": "Many programs being implemented by US employers, insurers, and health care providers use incentives to encourage patients to take better care of themselves. We critically review a range of these efforts and show that many programs, although well-meaning, are unlikely to have much impact because they require information, expertise, and self-control that few patients possess. As a result, benefits are likely to accrue disproportionately to patients who already are taking adequate care of their health. We show how these programs could be made more effective through the use of insights from behavioral economics. For example, incentive programs that offer patients small and frequent payments for behavior that would benefit the patients, such as medication adherence, can be more effective than programs with incentives that are far less visible because they are folded into a paycheck or used to reduce a monthly premium. Deploying more-nuanced insights from behavioral economics can lead to policies with the potential to increase patient engagement and deliver dividends for patients and favorable cost-effectiveness ratios for insurers, employers, and other relevant commercial entities.",
"title": ""
},
{
"docid": "61801d62bfb0afe664a1cb374461f8ec",
"text": "Methodical studies on Smart-shoe-based gait detection systems have become an influential constituent in decreasing elderly injuries due to fall. This paper proposes smartphone-based system for analyzing characteristics of gait by using a wireless Smart-shoe. The system employs four force sensitive resistors (FSR) to measure the pressure distribution underneath a foot. Data is collected via a Wi-Fi communication network between the Smart-shoe and smartphone for further processing in the phone. Experimentation and verification is conducted on 10 subjects with different gait including free gait. The sensor outputs, with gait analysis acquired from the experiment, are presented in this paper.",
"title": ""
},
{
"docid": "806eb562d4e2f1c8c45a08d7a8e7ce31",
"text": "We study admissibility of inference rules and unification with parameters in transitive modal logics (extensions of K4), in particular we generalize various results on parameterfree admissibility and unification to the setting with parameters. Specifically, we give a characterization of projective formulas generalizing Ghilardi’s characterization in the parameter-free case, leading to new proofs of Rybakov’s results that admissibility with parameters is decidable and unification is finitary for logics satisfying suitable frame extension properties (called cluster-extensible logics in this paper). We construct explicit bases of admissible rules with parameters for cluster-extensible logics, and give their semantic description. We show that in the case of finitely many parameters, these logics have independent bases of admissible rules, and determine which logics have finite bases. As a sideline, we show that cluster-extensible logics have various nice properties: in particular, they are finitely axiomatizable, and have an exponential-size model property. We also give a rather general characterization of logics with directed (filtering) unification. In the sequel, we will use the same machinery to investigate the computational complexity of admissibility and unification with parameters in cluster-extensible logics, and we will adapt the results to logics with unique top cluster (e.g., S4.2) and superintuitionistic logics.",
"title": ""
},
{
"docid": "72ecb9f6b7e88e92dfd68631f0992c63",
"text": "This paper presents a novel omnidirectional wheel mechanism, referred to as MY wheel-II, based on a sliced ball structure. The wheel consists of two balls of equal diameter on a common shaft and both balls are sliced into four spherical crowns. The two sets of spherical crowns are mounted at 45° from each other to produce a combined circular profile. Compared with previous MY wheel mechanism, this improved wheel mechanism not only is more insensitive to fragments and irregularities on the floor but also has a higher payload capacity. A kinematic model of a three-wheeled prototype platform is also derived, and the problem of wheel angular velocity fluctuations caused by the specific mechanical structure is studied. The optimal scale factor (OSF) leading to a minimum of trajectory error is adopted to solve this problem. The factors influencing the OSF are investigated through simulation. In addition, the methods used for determining the OSF are discussed briefly.",
"title": ""
},
{
"docid": "165fbade7d495ce47a379520697f0d75",
"text": "Neutral-point-clamped (NPC) inverters are the most widely used topology of multilevel inverters in high-power applications (several megawatts). This paper presents in a very simple way the basic operation and the most used modulation and control techniques developed to date. Special attention is paid to the loss distribution in semiconductors, and an active NPC inverter is presented to overcome this problem. This paper discusses the main fields of application and presents some technological problems such as capacitor balance and losses.",
"title": ""
},
{
"docid": "22e677f2073599d6ffc9eadf6f3a833f",
"text": "Statistical inference in psychology has traditionally relied heavily on p-value significance testing. This approach to drawing conclusions from data, however, has been widely criticized, and two types of remedies have been advocated. The first proposal is to supplement p values with complementary measures of evidence, such as effect sizes. The second is to replace inference with Bayesian measures of evidence, such as the Bayes factor. The authors provide a practical comparison of p values, effect sizes, and default Bayes factors as measures of statistical evidence, using 855 recently published t tests in psychology. The comparison yields two main results. First, although p values and default Bayes factors almost always agree about what hypothesis is better supported by the data, the measures often disagree about the strength of this support; for 70% of the data sets for which the p value falls between .01 and .05, the default Bayes factor indicates that the evidence is only anecdotal. Second, effect sizes can provide additional evidence to p values and default Bayes factors. The authors conclude that the Bayesian approach is comparatively prudent, preventing researchers from overestimating the evidence in favor of an effect.",
"title": ""
},
{
"docid": "4f52223cb3150b1b7a7079147bcb3bc2",
"text": "MAX NEUENDORF,1 AES Member, MARKUS MULTRUS,1 AES Member, NIKOLAUS RETTELBACH1, GUILLAUME FUCHS1, JULIEN ROBILLIARD1, JÉRÉMIE LECOMTE1, STEPHAN WILDE1, STEFAN BAYER,10 AES Member, SASCHA DISCH1, CHRISTIAN HELMRICH10, ROCH LEFEBVRE,2 AES Member, PHILIPPE GOURNAY2, BRUNO BESSETTE2, JIMMY LAPIERRE,2 AES Student Member, KRISTOFER KJÖRLING3, HEIKO PURNHAGEN,3 AES Member, LARS VILLEMOES,3 AES Associate Member, WERNER OOMEN,4 AES Member, ERIK SCHUIJERS4, KEI KIKUIRI5, TORU CHINEN6, TAKESHI NORIMATSU1, KOK SENG CHONG7, EUNMI OH,8 AES Member, MIYOUNG KIM8, SCHUYLER QUACKENBUSH,9 AES Fellow, AND BERNHARD GRILL1",
"title": ""
},
{
"docid": "f4d85ad52e37bd81058bfff830a52f0a",
"text": "A number of antioxidants and trace minerals have important roles in immune function and may affect health in transition dairy cows. Vitamin E and beta-carotene are important cellular antioxidants. Selenium (Se) is involved in the antioxidant system via its role in the enzyme glutathione peroxidase. Inadequate dietary vitamin E or Se decreases neutrophil function during the perpariturient period. Supplementation of vitamin E and/or Se has reduced the incidence of mastitis and retained placenta, and reduced duration of clinical symptoms of mastitis in some experiments. Research has indicated that beta-carotene supplementation may enhance immunity and reduce the incidence of retained placenta and metritis in dairy cows. Marginal copper deficiency resulted in reduced neutrophil killing and decreased interferon production by mononuclear cells. Copper supplementation of a diet marginal in copper reduced the peak clinical response during experimental Escherichia coli mastitis. Limited research indicated that chromium supplementation during the transition period may increase immunity and reduce the incidence of retained placenta.",
"title": ""
}
] |
scidocsrr
|
a17978c2b85c8efb21ea7c0c5172f9cf
|
System Characteristics, Satisfaction and E-Learning Usage: A Structural Equation Model (SEM)
|
[
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
},
{
"docid": "4f51f8907402f9859a77988f967c755f",
"text": "As a promising solution, electronic learning (e-learning) has been widely adopted by many companies to offer learning-on-demand opportunities to individual employees for reducing training time and cost. While information systems (IS) success models have received much attention among researchers, little research has been conducted to assess the success and/or effectiveness of e-learning systems in an organizational context. Whether traditional information systems success models can be extended to investigating e-learning systems success is rarely addressed. Based on the previous IS success literature, this study develops and validates a multidimensional model for assessing e-learning systems success (ELSS) from employee (e-learner) perspectives. The procedures used in conceptualizing an ELSS construct, generating items, collecting data, and validating a multiple-item scale for measuring ELSS are described. This paper presents evidence of the scale’s factor structure, reliability, content validity, criterion-related validity, convergent validity, and discriminant validity on the basis of analyzing data from a sample of 206 respondents. Theoretical and managerial implications of our results are then discussed. This empirically validated instrument will be useful to researchers in developing and testing e-learning systems theories, as well as to organizations in implementing successful e-learning systems.",
"title": ""
}
] |
[
{
"docid": "b41d8ca866268133f2af88495dad6482",
"text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.",
"title": ""
},
{
"docid": "d59b64b96cc79a2e21e705c021473f2a",
"text": "Bovine colostrum (first milk) contains very high concentrations of IgG, and on average 1 kg (500 g/liter) of IgG can be harvested from each immunized cow immediately after calving. We used a modified vaccination strategy together with established production systems from the dairy food industry for the large-scale manufacture of broadly neutralizing HIV-1 IgG. This approach provides a low-cost mucosal HIV preventive agent potentially suitable for a topical microbicide. Four cows were vaccinated pre- and/or postconception with recombinant HIV-1 gp140 envelope (Env) oligomers of clade B or A, B, and C. Colostrum and purified colostrum IgG were assessed for cross-clade binding and neutralization against a panel of 27 Env-pseudotyped reporter viruses. Vaccination elicited high anti-gp140 IgG titers in serum and colostrum with reciprocal endpoint titers of up to 1 × 10(5). While nonimmune colostrum showed some intrinsic neutralizing activity, colostrum from 2 cows receiving a longer-duration vaccination regimen demonstrated broad HIV-1-neutralizing activity. Colostrum-purified polyclonal IgG retained gp140 reactivity and neutralization activity and blocked the binding of the b12 monoclonal antibody to gp140, showing specificity for the CD4 binding site. Colostrum-derived anti-HIV antibodies offer a cost-effective option for preparing the substantial quantities of broadly neutralizing antibodies that would be needed in a low-cost topical combination HIV-1 microbicide.",
"title": ""
},
{
"docid": "5623321fb6c3a7c0b22980ce663632cd",
"text": "Vector representations for language have been shown to be useful in a number of Natural Language Processing (NLP) tasks. In this thesis, we aim to investigate the effectiveness of word vector representations for the research problem of Aspect-Based Sentiment Analysis (ABSA), which attempts to capture both semantic and sentiment information encoded in user generated content such as product reviews. In particular, we target three ABSA sub-tasks: aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data, and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vector-based features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector-based features, we achieve F1 scores of 79.9% for aspect term extraction, 86.7% for category detection, and 72.3% for aspect sentiment prediction. Co Thesis Supervisor: James Glass Title: Senior Research Scientist Co Thesis Supervisor: Mitra Mohtarami Title: Postdoctoral Associate 3",
"title": ""
},
{
"docid": "f8c7fcba6d0cb889836dc868f3ba12c8",
"text": "This article reviews dominant media portrayals of mental illness, the mentally ill and mental health interventions, and examines what social, emotional and treatment-related effects these may have. Studies consistently show that both entertainment and news media provide overwhelmingly dramatic and distorted images of mental illness that emphasise dangerousness, criminality and unpredictability. They also model negative reactions to the mentally ill, including fear, rejection, derision and ridicule. The consequences of negative media images for people who have a mental illness are profound. They impair self-esteem, help-seeking behaviours, medication adherence and overall recovery. Mental health advocates blame the media for promoting stigma and discrimination toward people with a mental illness. However, the media may also be an important ally in challenging public prejudices, initiating public debate, and projecting positive, human interest stories about people who live with mental illness. Media lobbying and press liaison should take on a central role for mental health professionals, not only as a way of speaking out for patients who may not be able to speak out for themselves, but as a means of improving public education and awareness. Also, given the consistency of research findings in this field, it may now be time to shift attention away from further cataloguing of media representations of mental illness to the more challenging prospect of how to use the media to improve the life chances and recovery possibilities for the one in four people living with mental disorders.",
"title": ""
},
{
"docid": "8d30afbccfa76492b765f69d34cd6634",
"text": "Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs. Experiments show that the proposed model can generate more appropriate and informative responses than stateof-the-art baselines.",
"title": ""
},
{
"docid": "fb31665935c1a0964e70c864af8ff46f",
"text": "In the context of object and scene recognition, state-of-the-art performances are obtained with visual Bag-of-Words (BoW) models of mid-level representations computed from dense sampled local descriptors (e.g., Scale-Invariant Feature Transform (SIFT)). Several methods to combine low-level features and to set mid-level parameters have been evaluated recently for image classification. In this chapter, we study in detail the different components of the BoW model in the context of image classification. Particularly, we focus on the coding and pooling steps and investigate the impact of the main parameters of the BoW pipeline. We show that an adequate combination of several low (sampling rate, multiscale) and mid-level (codebook size, normalization) parameters is decisive to reach good performances. Based on this analysis, we propose a merging scheme that exploits the specificities of edge-based descriptors. Low and high contrast regions are pooled separately and combined to provide a powerful representation of images. We study the impact on classification performance of the contrast threshold that determines whether a SIFT descriptor corresponds to a low contrast region or a high contrast region. Successful experiments are provided on the Caltech-101 and Scene-15 datasets. M. T. Law (B) · N. Thome · M. Cord LIP6, UPMC—Sorbonne University, Paris, France e-mail: [email protected] N. Thome e-mail: [email protected] M. Cord e-mail: [email protected] B. Ionescu et al. (eds.), Fusion in Computer Vision, Advances in Computer 29 Vision and Pattern Recognition, DOI: 10.1007/978-3-319-05696-8_2, © Springer International Publishing Switzerland 2014",
"title": ""
},
{
"docid": "ed5b6ea3b1ccc22dff2a43bea7aaf241",
"text": "Testing is an important process that is performed to support quality assurance. Testing activities support quality assurance by gathering information about the nature of the software being studied. These activities consist of designing test cases, executing the software with those test cases, and examining the results produced by those executions. Studies indicate that more than fifty percent of the cost of software development is devoted to testing, with the percentage for testing critical software being even higher. As software becomes more pervasive and is used more often to perform critical tasks, it will be required to be of higher quality. Unless we can find efficient ways to perform effective testing, the percentage of development costs devoted to testing will increase significantly. This report briefly assesses the state of the art in software testing, outlines some future directions in software testing, and gives some pointers to software testing resources.",
"title": ""
},
{
"docid": "a602a532a7b95eae050d084e10606951",
"text": "Municipal solid waste management has emerged as one of the greatest challenges facing environmental protection agencies in developing countries. This study presents the current solid waste management practices and problems in Nigeria. Solid waste management is characterized by inefficient collection methods, insufficient coverage of the collection system and improper disposal. The waste density ranged from 280 to 370 kg/m3 and the waste generation rates ranged from 0.44 to 0.66 kg/capita/day. The common constraints faced environmental agencies include lack of institutional arrangement, insufficient financial resources, absence of bylaws and standards, inflexible work schedules, insufficient information on quantity and composition of waste, and inappropriate technology. The study suggested study of institutional, political, social, financial, economic and technical aspects of municipal solid waste management in order to achieve sustainable and effective solid waste management in Nigeria.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "6eca26209b9fcca8a9df76307108a3a8",
"text": "Transform-based lossy compression has a huge potential for hyperspectral data reduction. Hyperspectral data are 3-D, and the nature of their correlation is different in each dimension. This calls for a careful design of the 3-D transform to be used for compression. In this paper, we investigate the transform design and rate allocation stage for lossy compression of hyperspectral data. First, we select a set of 3-D transforms, obtained by combining in various ways wavelets, wavelet packets, the discrete cosine transform, and the Karhunen-Loegraveve transform (KLT), and evaluate the coding efficiency of these combinations. Second, we propose a low-complexity version of the KLT, in which complexity and performance can be balanced in a scalable way, allowing one to design the transform that better matches a specific application. Third, we integrate this, as well as other existing transforms, in the framework of Part 2 of the Joint Photographic Experts Group (JPEG) 2000 standard, taking advantage of the high coding efficiency of JPEG 2000, and exploiting the interoperability of an international standard. We introduce an evaluation framework based on both reconstruction fidelity and impact on image exploitation, and evaluate the proposed algorithm by applying this framework to AVIRIS scenes. It is shown that the scheme based on the proposed low-complexity KLT significantly outperforms previous schemes as to rate-distortion performance. As for impact on exploitation, we consider multiclass hard classification, spectral unmixing, binary classification, and anomaly detection as benchmark applications",
"title": ""
},
{
"docid": "d2836880ac69bf35e53f5bc6de8bc5dc",
"text": "There is currently significant interest in freeform, curve-based authoring of graphic images. In particular, \"diffusion curves\" facilitate graphic image creation by allowing an image designer to specify naturalistic images by drawing curves and setting colour values along either side of those curves. Recently, extensions to diffusion curves based on the biharmonic equation have been proposed which provide smooth interpolation through specified colour values and allow image designers to specify colour gradient constraints at curves. We present a Boundary Element Method (BEM) for rendering diffusion curve images with smooth interpolation and gradient constraints, which generates a solved boundary element image representation. The diffusion curve image can be evaluated from the solved representation using a novel and efficient line-by-line approach. We also describe \"curve-aware\" upsampling, in which a full resolution diffusion curve image can be upsampled from a lower resolution image using formula evaluated orrections near curves. The BEM solved image representation is compact. It therefore offers advantages in scenarios where solved image representations are transmitted to devices for rendering and where PDE solving at the device is undesirable due to time or processing constraints.",
"title": ""
},
{
"docid": "235e1f328a847fa7b6e074a58defed0b",
"text": "A stemming algorithm, a procedure to reduce all words with the same stem to a common form, is useful in many areas of computational linguistics and information-retrieval work. While the form of the algorithm varies with its application, certain linguistic problems are common to any stemming procedure. As a basis for evaluation of previous attempts to deal with these problems, this paper first discusses the theoretical and practical attributes of stemming algorithms. Then a new version of a context-sensitive, longest-match stemming algorithm for English is proposed; though developed for use in a library information transfer system, it is of general application. A major linguistic problem in stemming, variation in spelling of stems, is discussed in some detail and several feasible programmed solutions are outlined, along with sample results of one of these methods.",
"title": ""
},
{
"docid": "8e50613e8aab66987d650cd8763811e5",
"text": "Along with the great increase of internet and e-commerce, the use of credit card is an unavoidable one. Due to the increase of credit card usage, the frauds associated with this have also increased. There are a lot of approaches used to detect the frauds. In this paper, behavior based classification approach using Support Vector Machines are employed and efficient feature extraction method also adopted. If any discrepancies occur in the behaviors transaction pattern then it is predicted as suspicious and taken for further consideration to find the frauds. Generally credit card fraud detection problem suffers from a large amount of data, which is rectified by the proposed method. Achieving finest accuracy, high fraud catching rate and low false alarms are the main tasks of this approach.",
"title": ""
},
{
"docid": "4a9474c0813646708400fc02c344a976",
"text": "Over the years, the Web has shrunk the world, allowing individuals to share viewpoints with many more people than they are able to in real life. At the same time, however, it has also enabled anti-social and toxic behavior to occur at an unprecedented scale. Video sharing platforms like YouTube receive uploads from millions of users, covering a wide variety of topics and allowing others to comment and interact in response. Unfortunately, these communities are periodically plagued with aggression and hate attacks. In particular, recent work has showed how these attacks often take place as a result of “raids,” i.e., organized efforts coordinated by ad-hoc mobs from third-party communities. Despite the increasing relevance of this phenomenon, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de-facto solution is to reactively rely on user reports and human reviews. In this paper, we propose an automated solution to identify videos that are likely to be targeted by coordinated harassers. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of raid victims. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided with high accuracy (AUC up to 94%). Overall, our work paves the way for providing video platforms like YouTube with proactive systems to detect and mitigate coordinated hate attacks.",
"title": ""
},
{
"docid": "733a7a024f5e408323f9b037828061bb",
"text": "Hidden Markov model (HMM) is one of the popular techniques for story segmentation, where hidden Markov states represent the topics, and the emission distributions of n-gram language model (LM) are dependent on the states. Given a text document, a Viterbi decoder finds the hidden story sequence, with a change of topic indicating a story boundary. In this paper, we propose a discriminative approach to story boundary detection. In the HMM framework, we use deep neural network (DNN) to estimate the posterior probability of topics given the bag-ofwords in the local context. We call it the DNN-HMM approach. We consider the topic dependent LM as a generative modeling technique, and the DNN-HMM as the discriminative solution. Experiments on topic detection and tracking (TDT2) task show that DNN-HMM outperforms traditional n-gram LM approach significantly and achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "3ab4c2383569fc02f0395e79070dc16d",
"text": "A report released last week by the US National Academies makes recommendations for tackling the issues surrounding the era of petabyte science.",
"title": ""
},
{
"docid": "f006fff7ddfaed4b6016d59377144b7a",
"text": "In this paper I consider whether traditional behaviors of animals, like traditions of humans, are transmitted by imitation learning. Review of the literature on problem solving by captive primates, and detailed consideration of two widely cited instances of purported learning by imitation and of culture in free-living primates (sweet-potato washing by Japanese macaques and termite fishing by chimpanzees), suggests that nonhuman primates do not learn to solve problems by imitation. It may, therefore, be misleading to treat animal traditions and human culture as homologous (rather than analogous) and to refer to animal traditions as cultural.",
"title": ""
},
{
"docid": "745451b3ca65f3388332232b370ea504",
"text": "This article develops a framework that applies to single securities to test whether asset pricing models can explain the size, value, and momentum anomalies. Stock level beta is allowed to vary with firm-level size and book-to-market as well as with macroeconomic variables. With constant beta, none of the models examined capture any of the market anomalies. When beta is allowed to vary, the size and value effects are often explained, but the explanatory power of past return remains robust. The past return effect is captured by model mispricing that varies with macroeconomic variables.",
"title": ""
},
{
"docid": "a00acd7a9a136914bf98478ccd85e812",
"text": "Deep-learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, thus resulting in sub-optimal performance. In order to mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function, have been proposed. In this work, we investigate the behavior of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalized Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.",
"title": ""
},
{
"docid": "26aee4feb558468d571138cd495f51d3",
"text": "A 300-MHz, custom 64-bit VLSI, second-generation Alpha CPU chip has been developed. The chip was designed in a 0.5-um CMOS technology using four levels of metal. The die size is 16.5 mm by 18.1 mm, contains 9.3 million transistors, operates at 3.3 V, and supports 3.3-V/5.0-V interfaces. Power dissipation is 50 W. It contains an 8-KB instruction cache; an 8-KB data cache; and a 96-KB unified second-level cache. The chip can issue four instructions per cycle and delivers 1,200 mips/600 MFLOPS (peak). Several noteworthy circuit and implementation techniques were used to attain the target operating frequency.",
"title": ""
}
] |
scidocsrr
|
c6a6366c26bdb392b16132f6b3ffe71b
|
Summarizing Answers in Non-Factoid Community Question-Answering
|
[
{
"docid": "7e06f62814a2aba7ddaff47af62c13b4",
"text": "Natural language conversation is widely regarded as a highly difficult problem, which is usually attacked with either rule-based or learning-based models. In this paper we propose a retrieval-based automatic response model for short-text conversation, to exploit the vast amount of short conversation instances available on social media. For this purpose we introduce a dataset of short-text conversation based on the real-world instances from Sina Weibo (a popular Chinese microblog service), which will be soon released to public. This dataset provides rich collection of instances for the research on finding natural and relevant short responses to a given short text, and useful for both training and testing of conversation models. This dataset consists of both naturally formed conversations, manually labeled data, and a large repository of candidate responses. Our preliminary experiments demonstrate that the simple retrieval-based conversation model performs reasonably well when combined with the rich instances in our dataset.",
"title": ""
}
] |
[
{
"docid": "ffe3a7171dccfb51ff22b41b4612b125",
"text": "Several models of myopia predict that growth of axial length is stimulated by blur. Accommodative lag has been suggested as an important source of blur in the development of myopia and this study has modeled how cross-link interactions between accommodation and convergence might interact with uncorrected distance heterophoria and refractive error to influence accommodative lag. Accommodative lag was simulated with two models of interactions between accommodation and convergence (one with and one without adaptable tonic elements). Simulations of both models indicate that both uncorrected hyperopia and esophoria increase the lag of accommodative and uncorrected myopia and exophoria decrease the lag or introduce a lead of accommodation in response to the near (40 cm) stimulus. These effects were increased when gain of either cross-link, accommodative convergence (AC/A) or convergence accommodation (CA/C), was increased within a moderate range of values while the other was fixed at a normal value (clamped condition). These effects were exaggerated when both the AC/A and CA/C ratios were increased (covaried condition) and affects of cross-link gain were negated when an increase of one cross-link (e.g. AC/A) was accompanied by a reduction of the other cross-link (e.g. CA/C) (reciprocal condition). The inclusion of tonic adaptation in the model reduced steady state errors of accommodation for all conditions except when the AC/A ratio was very high (2 MA/D). Combinations of cross-link interactions between accommodation and convergence that resemble either clamped or reciprocal patterns occur naturally in clinical populations. Simulations suggest that these two patterns of abnormal cross-link interactions could affect the progression of myopia differently. Adaptable tonic accommodation and tonic vergence could potentially reduce the progression of myopia by reducing the lag of accommodation.",
"title": ""
},
{
"docid": "060167f774d43cd41476de531ded40ad",
"text": "In this study, we proposed a research model to investigate the factors influencing users’ continuance intention to use Twitter. Building on the uses and gratification framework, we have proposed four types of gratifications for Twitter usage, including content gratification, technology gratification, process gratification, and social gratification. We conducted an online survey and collected 124 responses. The data was analyzed using Partial Least Squares. Our results showed that content gratifications and new technology gratification are the two key types of gratifications affecting the continuance intention to use Twitter. We conclude with a discussion of theoretical and practical implications. We believe that this study will provide important insights for future research on Twitter.",
"title": ""
},
{
"docid": "166b16222ecc15048972e535dbf4cb38",
"text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.",
"title": ""
},
{
"docid": "6d925c32d3900512e0fd0ed36b683c69",
"text": "This paper presents a detailed design process of an ultra-high speed, switched reluctance machine for micro machining. The performance goal of the machine is to reach a maximum rotation speed of 750,000 rpm with an output power of 100 W. The design of the rotor involves reducing aerodynamic drag, avoiding mechanical resonance, and mitigating excessive stress. The design of the stator focuses on meeting the torque requirement while minimizing core loss and copper loss. The performance of the machine and the strength of the rotor structure are both verified through finite-element simulations The final design is a 6/4 switched reluctance machine with a 6mm diameter rotor that is wrapped in a carbon fiber sleeve and exhibits 13.6 W of viscous loss. The stator has shoeless poles and exhibits 19.1 W of electromagnetic loss.",
"title": ""
},
{
"docid": "4e0735c47fba93e77bc33eee689ed03e",
"text": "Word-of-mouth (WOM) has been recognized as one of the most influential resources of information transmission. However, conventional WOM communication is only effective within limited social contact boundaries. The advances of information technology and the emergence of online social network sites have changed the way information is transmitted and have transcended the traditional limitations of WOM. This paper describes online interpersonal influence or electronic word of mouth (eWOM) because it plays a significant role in consumer purchase decisions.",
"title": ""
},
{
"docid": "f264d5b90dfb774e9ec2ad055c4ebe62",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "2e864dcde57ea1716847f47977af0140",
"text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.",
"title": ""
},
{
"docid": "413112cc78df9fac45a254c74049f724",
"text": "We are developing compact, high-power chargers for rapid charging of energy storage capacitors. The main application is presently rapid charging of the capacitors inside of compact Marx generators for reprated operation. Compact Marx generators produce output pulses with amplitudes above 300 kV with ns or subns rise-times. A typical application is the generation of high power microwaves. Initially all energy storage capacitors in a Marx generator are charged in parallel. During the so-called erection cycle, the capacitors are connected in series. The charging voltage in the parallel configuration is around 40-50 kV. The input voltage of our charger is in the range of several hundred volts. Rapid charging of the capacitors in the parallel configuration will enable a high pulse repetition-rate of the compact Marx generator. The high power charger uses state-of-the-art IGBTs (isolated gate bipolar transistors) in an H-bridge topology and a compact, high frequency transformer. The IGBTs and the associated controls are packaged for minimum weight and maximum power density. The packaging and device selection makes use of burst mode operation (thermal inertia) of the charger. The present charger is considerably smaller than the one presented in Giesselmann, M et al., (2001).",
"title": ""
},
{
"docid": "8df49a873585755ec3a23a314846e851",
"text": "We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.",
"title": ""
},
{
"docid": "b22fff6f567db717b00e67a63fa23ca9",
"text": "This paper was originally published in Biological Procedures Online (BPO) on March 23, 2006. It was brought to the attention of the journal and authors that reference 74 was incorrect. The original citation for reference 74, “Stanford V. Biosignals offer potential for direct interfaces and health monitoring. Pervasive Computing, IEEE 2004; 3(1):99–103.” should read “Costanza E, Inverso SA, Allen R. ‘Toward Subtle Intimate Interfaces for Mobile Devices Using an EMG Controller’ in Proc CHI2005, April 2005, Portland, OR, USA.”",
"title": ""
},
{
"docid": "5c7a66c440b73b9ff66cd73c8efb3718",
"text": "Image captioning is a crucial task in the interaction of computer vision and natural language processing. It is an important way that help human understand the world better. There are many studies on image English captioning, but little work on image Chinese captioning because of the lack of the corresponding datasets. This paper focuses on image Chinese captioning by using abundant English datasets for the issue. In this paper, a method of adding English information to image Chinese captioning is proposed. We validate the use of English information with state-of-the art performance on the datasets: Flickr8K-CN.",
"title": ""
},
{
"docid": "8d092dfa88ba239cf66e5be35fcbfbcc",
"text": "We present VideoWhisper, a novel approach for unsupervised video representation learning. Based on the observation that the frame sequence encodes the temporal dynamics of a video (e.g., object movement and event evolution), we treat the frame sequential order as a self-supervision to learn video representations. Unlike other unsupervised video feature learning methods based on frame-level feature reconstruction that is sensitive to visual variance, VideoWhisper is driven by a novel video “sequence-to-whisper” learning strategy. Specifically, for each video sequence, we use a prelearned visual dictionary to generate a sequence of high-level semantics, dubbed “whisper,” which can be considered as the language describing the video dynamics. In this way, we model VideoWhisper as an end-to-end sequence-to-sequence learning model using attention-based recurrent neural networks. This model is trained to predict the whisper sequence and hence it is able to learn the temporal structure of videos. We propose two ways to generate video representation from the model. Through extensive experiments on two real-world video datasets, we demonstrate that video representation learned by V ideoWhisper is effective to boost fundamental multimedia applications such as video retrieval and event classification.",
"title": ""
},
{
"docid": "dad658b04712fe4ff03b356ed842e637",
"text": "Recurrent neural network language models (RNNLMs) have recently produced improvements on language processing tasks ranging from machine translation to word tagging and speech recognition. To date, however, the computational expense of RNNLMs has hampered their application to first pass decoding. In this paper, we show that by restricting the RNNLM calls to those words that receive a reasonable score according to a n-gram model, and by deploying a set of caches, we can reduce the cost of using an RNNLM in the first pass to that of using an additional n-gram model. We compare this scheme to lattice rescoring, and find that they produce comparable results for a Bing Voice search task. The best performance results from rescoring a lattice that is itself created with a RNNLM in the first pass.",
"title": ""
},
{
"docid": "3f467988a35ecb7b6b9feef049407bb2",
"text": "Semantic parsing of large-scale 3D point clouds is an important research topic in computer vision and remote sensing fields. Most existing approaches utilize hand-crafted features for each modality independently and combine them in a heuristic manner. They often fail to consider the consistency and complementary information among features adequately, which makes them difficult to capture high-level semantic structures. The features learned by most of the current deep learning methods can obtain high-quality image classification results. However, these methods are hard to be applied to recognize 3D point clouds due to unorganized distribution and various point density of data. In this paper, we propose a 3DCNN-DQN-RNN method which fuses the 3D convolutional neural network (CNN), Deep Q-Network (DQN) and Residual recurrent neural network (RNN)for an efficient semantic parsing of large-scale 3D point clouds. In our method, an eye window under control of the 3D CNN and DQN can localize and segment the points of the object's class efficiently. The 3D CNN and Residual RNN further extract robust and discriminative features of the points in the eye window, and thus greatly enhance the parsing accuracy of large-scale point clouds. Our method provides an automatic process that maps the raw data to the classification results. It also integrates object localization, segmentation and classification into one framework. Experimental results demonstrate that the proposed method outperforms the state-of-the-art point cloud classification methods.",
"title": ""
},
{
"docid": "b34db00c8a84eab1c7b1a6458fc6cd97",
"text": "The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of humancomputer interaction. Index Terms —Vision-based gesture recognition, gesture analysis, hand tracking, nonrigid motion analysis, human-computer",
"title": ""
},
{
"docid": "d1ab78928c003109eda9e02384e7ca3f",
"text": "Code-switching is commonly used in the free-form text environment, such as social media, and it is especially favored in emotion expressions. Emotions in codeswitching texts differ from monolingual texts in that they can be expressed in either monolingual or bilingual forms. In this paper, we first utilize two kinds of knowledge, i.e. bilingual and sentimental information to bridge the gap between different languages. Moreover, we use a term-document bipartite graph to incorporate both bilingual and sentimental information, and propose a label propagation based approach to learn and predict in the bipartite graph. Empirical studies demonstrate the effectiveness of our proposed approach in detecting emotion in code-switching texts.",
"title": ""
},
{
"docid": "7f2acf667a66f2812023c26c4ca95cf1",
"text": "Vehicle-IT convergence technology is a rapidly rising paradigm of modern vehicles, in which an electronic control unit (ECU) is used to control the vehicle electrical systems, and the controller area network (CAN), an in-vehicle network, is commonly used to construct an efficient network of ECUs. Unfortunately, security issues have not been treated properly in CAN, although CAN control messages could be life-critical. With the appearance of the connected car environment, in-vehicle networks (e.g., CAN) are now connected to external networks (e.g., 3G/4G mobile networks), enabling an adversary to perform a long-range wireless attack using CAN vulnerabilities. In this paper we show that a long-range wireless attack is physically possible using a real vehicle and malicious smartphone application in a connected car environment. We also propose a security protocol for CAN as a countermeasure designed in accordance with current CAN specifications. We evaluate the feasibility of the proposed security protocol using CANoe software and a DSP-F28335 microcontroller. Our results show that the proposed security protocol is more efficient than existing security protocols with respect to authentication delay and communication load.",
"title": ""
},
{
"docid": "701fe507d3efe69f82f040967d6e246f",
"text": "The performance of the brain is constrained by wiring length and maintenance costs. The apparently inverse relationship between number of neurons in the various interneuron classes and the spatial extent of their axon trees suggests a mathematically definable organization, reminiscent of 'small-world' or scale-free networks observed in other complex systems. The wiring-economy-based classification of cortical inhibitory interneurons is supported by the distinct physiological patterns of class members in the intact brain. The complex wiring of diverse interneuron classes could represent an economic solution for supporting global synchrony and oscillations at multiple timescales with minimum axon length.",
"title": ""
},
{
"docid": "0dfc905792374c8224cbe2d34fb51fe5",
"text": "Randomized direct search algorithms for continuous domains, such as evolution strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to efficiently search in non-separable and ill-conditioned landscapes the covariance matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, covariance matrix adaptation (CMA) is considered state-of-the-art in evolution strategies. In order to sample the normal distribution, the adapted covariance matrix needs to be decomposed, requiring in general Θ(n 3) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one covariance matrix update and the computationally expensive decomposition of the covariance matrix. The newly developed update rule reduces the computational complexity of the rank-one covariance matrix adaptation to Θ(n 2) without resorting to outdated distributions. We derive new versions of the elitist covariance matrix adaptation evolution strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental covariance matrix update and investigate its performance. Apart from the reduced time-complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler compared to the original versions. The new update rule improves the performance of the CMA-ES for large scale machine learning problems in which the objective function can be evaluated fast.",
"title": ""
}
] |
scidocsrr
|
94c86a6bf4f940b48664d566a6115783
|
An operational semantics for Stateflow
|
[
{
"docid": "38ac14820c7116046fd99f8995f3cbe4",
"text": "Statecharts is a visual language for specifying reactive system behavior. The formalism extends traditional finite-state machines with notions of hierarchy and concurrency, and it is used in many popular software design notations. A large part of the appeal of Statecharts derives from its basis in state machines, with their intuitive operational interpretation. The classical semantics of Statecharts, however, suffers from a serious defect; it is not compositional, meaning that the behavior of system descriptions cannot be inferred from the behavior of their subsystems. Compositionality is a prerequisite for exploiting the modular structure of Statecharts for simulation, verification, and code generation, and it also provides the necessary foundation for reusability.\nThis paper suggests a new compositional approach to formalizing Statecharts semantics as flattened labeled transition systems in which transitions represent system steps. The approach builds on ideas developed for timed process calculi and employs structural operational rules to define the transitions of a Statecharts expression in terms of the transitions of its subexpressions. It is first presented for a simple dialect of Statecharts, with respect to a variant of Pnueli and Shalev's semantics, and is illustrated by means of a small example. To demonstrate its flexibility, the proposed approach is then extended to deal with practically useful features available in many Statecharts variants, namely state references, history states, and priority concepts along state hierarchies.",
"title": ""
},
{
"docid": "c3160b191c67099072405cdf454f3676",
"text": "This paper presetits a method for automatically generating test cases to structural coverage criteria. We show how a niodel checker can be used to autoniutically generate complete test sequetices that will provide a predefined coverage of uti? soffivare developnietit artifact that can be represented as a ffiriitr state niodel. Our goal is to help reduce the high cost of developitig test cases f o r safep-critical sojfivare applications that require a certain level of coveruge for certijicatioti, f o r example, safep-critical avionics sxstenis that need to denlotistrate MC/DC (modijied cotidition arid decision) coverage of the code. We deftie aJmiial franiework suitable for modeling soft\\care artifacts, like, reqitirenients models, software spec$cations, or inipletnetitatiotis. We then show how various structural coverage criteria can be formalized and used to make a triode1 checker provide test sequences to achieve this coverqe. To illustrate our approach, we demonstrate, for the first titiie, how a niodel checker can be used to generate test sequerice.sfor MUDC coverage of a m a l l case example.",
"title": ""
}
] |
[
{
"docid": "ffef016fba37b3dc167a1afb7e7766f0",
"text": "We show that the Thompson Sampling algorithm achieves logarithmic expected regret for the Bernoulli multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time T is O( lnT ∆ + 1 ∆3 ). And, for the N -armed bandit problem, the expected regret in time T is O( [ ( ∑N i=2 1 ∆i ) ] lnT ). Our bounds are optimal but for the dependence on ∆i and the constant factors in big-Oh.",
"title": ""
},
{
"docid": "00bc7c810946fa30bf1fdc66e8fb7fc2",
"text": "Voluntary motor commands produce two kinds of consequences. Initially, a sensory consequence is observed in terms of activity in our primary sensory organs (e.g., vision, proprioception). Subsequently, the brain evaluates the sensory feedback and produces a subjective measure of utility or usefulness of the motor commands (e.g., reward). As a result, comparisons between predicted and observed consequences of motor commands produce two forms of prediction error. How do these errors contribute to changes in motor commands? Here, we considered a reach adaptation protocol and found that when high quality sensory feedback was available, adaptation of motor commands was driven almost exclusively by sensory prediction errors. This form of learning had a distinct signature: as motor commands adapted, the subjects altered their predictions regarding sensory consequences of motor commands, and generalized this learning broadly to neighboring motor commands. In contrast, as the quality of the sensory feedback degraded, adaptation of motor commands became more dependent on reward prediction errors. Reward prediction errors produced comparable changes in the motor commands, but produced no change in the predicted sensory consequences of motor commands, and generalized only locally. Because we found that there was a within subject correlation between generalization patterns and sensory remapping, it is plausible that during adaptation an individual's relative reliance on sensory vs. reward prediction errors could be inferred. We suggest that while motor commands change because of sensory and reward prediction errors, only sensory prediction errors produce a change in the neural system that predicts sensory consequences of motor commands.",
"title": ""
},
{
"docid": "3de4922096e2d9bf04ba1ea89b3b3ff1",
"text": "Events of various sorts make up an important subset of the entities relevant not only in knowledge representation but also in natural language processing and numerous other fields and tasks. How to represent these in a homogeneous yet expressive, extensive, and extensible way remains a challenge. In this paper, we propose an approach based on FrameBase, a broad RDFS-based schema consisting of frames and roles. The concept of a frame, which is a very general one, can be considered as subsuming existing definitions of events. This ensures a broad coverage and a uniform representation of various kinds of events, thus bearing the potential to serve as a unified event model. We show how FrameBase can represent events from several different sources and domains. These include events from a specific taxonomy related to organized crime, events captured using schema.org, and events from DBpedia.",
"title": ""
},
{
"docid": "c8be82cceec30a4aa72cc23b844546df",
"text": "SVM is extensively used in pattern recognition because of its capability to classify future unseen data and its’ good generalization performance. Several algorithms and models have been proposed for pattern recognition that uses SVM for classification. These models proved the efficiency of SVM in pattern recognition. Researchers have compared their results for SVM with other traditional empirical risk minimization techniques, such as Artificial Neural Network, Decision tree, etc. Comparison results show that SVM is superior to these techniques. Also, different variants of SVM are developed for enhancing the performance. In this paper, SVM is briefed and some of the pattern recognition applications of SVM are surveyed and briefly summarized. Keyword Hyperplane, Pattern Recognition, Quadratic Programming Problem, Support Vector Machines.",
"title": ""
},
{
"docid": "3baf11f31351e92c7ff56b066434ae2c",
"text": "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-ofthe-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.",
"title": ""
},
{
"docid": "b91f80bc17de9c4e15ec80504e24b045",
"text": "Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight encryption scheme, referred to as Hummingbird, and its applications to a privacy-preserving identification and mutual authentication protocol for RFID applications. Hummingbird can provide the designed security with a small block size and is therefore expected to meet the stringent response time and power consumption requirements described in the ISO protocol without any modification of the current standard. We show that Hummingbird is resistant to the most common attacks such as linear and differential cryptanalysis. Furthermore, we investigate some properties for integrating the Hummingbird into a privacypreserving identification and mutual authentication protocol.",
"title": ""
},
{
"docid": "dae9a30b5deb97825ca87c1ca65e8285",
"text": "\"Is there a need for fuzzy logic?\" is an issue which is associated with a long history of spirited discussions and debates. There are many misconceptions about fuzzy logic. Fuzzy logic is not fuzzy. Basically, fuzzy logic is a precise logic of imprecision and approximate reasoning. More specifically, fuzzy logic may be viewed as an attempt at formalization/mechanization of two remarkable human capabilities. First, the capability to converse, reason and make rational decisions in an environment of imprecision, uncertainty, incompleteness of information, conflicting information, partiality of truth and partiality of possibility- in short, in an environment of imperfect information. And second, the capability to perform a wide variety of physical and mental tasks without any measurements and any computations (Zadeh 1999, 2001). In fact, one of the principal contributions of fuzzy logic-a contribution which is widely unrecognized-is its high power of precisiation. Fuzzy logic is much more than a logical system. It has many facets. The principal facets are: logical, fuzzy-set-theoretic, epistemic and relational. Most of the practical applications of fuzzy logic are associated with its relational facet. In this paper, fuzzy logic is viewed in a nonstandard perspective. In this perspective, the cornerstones of fuzzy logic-and its principal distinguishing features-are: graduation, granulation, precisiation and the concept of a generalized constraint.",
"title": ""
},
{
"docid": "2cebd9275e30da41a97f6d77207cc793",
"text": "Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks are performed in a timely manner. Additionally, execution of mission specific tasks such as imaging a room must be balanced against the need to perform more general tasks such as obstacle avoidance. This problem has been addressed by maintaining relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain, and demonstrate that while the problem state space is countably infinite, we may leverage the problem’s structure to guarantee efficient learning.",
"title": ""
},
{
"docid": "6097315ac2e4475e8afd8919d390babf",
"text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.",
"title": ""
},
{
"docid": "5b1241edf4a9853614a18139323f74eb",
"text": "This paper presents a W-band SPDT switch implemented using PIN diodes in a new 90 nm SiGe BiCMOS technology. The SPDT switch achieves a minimum insertion loss of 1.4 dB and an isolation of 22 dB at 95 GHz, with less than 2 dB insertion loss from 77-134 GHz, and greater than 20 dB isolation from 79-129 GHz. The input and output return losses are greater than 10 dB from 73-133 GHz. By reverse biasing the off-state PIN diodes, the P1dB is larger than +24 dBm. To the authors' best knowledge, these results demonstrate the lowest loss and highest power handling capability achieved by a W-band SPDT switch in any silicon-based technology reported to date.",
"title": ""
},
{
"docid": "b2f6c6b4e14824dcd78cdc28547503c8",
"text": "This paper describes the design of digital tracking loops for GPS receivers in a high dynamics environment, without external aiding. We adopted the loop structure of a frequency-locked loop (FLL)-assisted phase-locked loop (PLL) and design it to track accelerations steps, as those occurring in launching vehicles. We used a completely digital model of the loop where the FLL and PLL parts are jointly designed, as opposed to the classical discretized analog model with separately designed FLL and PLL. The new approach does not increase the computational burden. We performed simulations and real RF signal experiments of a fixed-point implementation of the loop, showing that reliable tracking of steps up to 40 g can be achieved",
"title": ""
},
{
"docid": "e35994d3f2cb82666115a001dbd002d0",
"text": "Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on and are also trained on the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information such as Wikipedia page views and related entities which is typically only available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities---i.e., not just long-tail entities---improves upon the state-of-the-art without depending on any entity-specific training data.",
"title": ""
},
{
"docid": "ed5a9e452b4875434207f16737c29e27",
"text": "Social Networking Sites (SNSs) are applications that allow users to create personal profiles to interact with friends or public and to share data such as photos and short videos. The amount of these personal disclosures has raised issues and concerns regarding SNSs' privacy. Users' attitudes toward privacy and their sharing behaviours are inconsistent because they are concerned about privacy, but continue sharing personal information. Also, the existing privacy settings are not flexible enough to prevent privacy risks. In this paper, we propose a novel model called Privacy Settings Model (PSM) that can lead users to understand, control, and update SNSs' privacy settings. We believe that this model will enhance their privacy behaviours toward SNSs' privacy settings and reduce privacy risks.",
"title": ""
},
{
"docid": "4ef861b705c207c95d93687571caea89",
"text": "Mounting of the acute inflammatory response is crucial for host defense and pivotal to the development of chronic inflammation, fibrosis, or abscess formation versus the protective response and the need of the host tissues to return to homeostasis. Within self-limited acute inflammatory exudates, novel families of lipid mediators are identified, named resolvins (Rv), protectins, and maresins, which actively stimulate cardinal signs of resolution, namely, cessation of leukocytic infiltration, counterregulation of proinflammatory mediators, and the uptake of apoptotic neutrophils and cellular debris. The biosynthesis of these resolution-phase mediators in sensu stricto is initiated during lipid-mediator class switching, in which the classic initiators of acute inflammation, prostaglandins and leukotrienes (LTs), switch to produce specialized proresolving mediators (SPMs). In this work, we review recent evidence on the structure and functional roles of these novel lipid mediators of resolution. Together, these show that leukocyte trafficking and temporal spatial signals govern the resolution of self-limited inflammation and stimulate homeostasis.",
"title": ""
},
{
"docid": "7d4d0e4d99b5dfe675f5f4eff5e5679f",
"text": "Remote work and intensive use of Information Technologies (IT) are increasingly common in organizations. At the same time, professional stress seems to develop. However, IS research has paid little attention to the relationships between these two phenomena. The purpose of this research in progress is to present a framework that introduces the influence of (1) new spatial and temporal constraints and of (2) intensive use of IT on employee emotions at work. Specifically, this paper relies on virtuality (e.g. Chudoba et al. 2005) and media richness (Daft and Lengel 1984) theories to determine the emotional consequences of geographically distributed work.",
"title": ""
},
{
"docid": "6e2239ebdf662f33b81b665b20516eec",
"text": "We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.",
"title": ""
},
{
"docid": "b4c25df52a0a5f6ab23743d3ca9a3af2",
"text": "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.",
"title": ""
},
{
"docid": "35792db324d1aaf62f19bebec6b1e825",
"text": "Keyphrases: Global Vectors for Word Representation (GloVe). Intrinsic and extrinsic evaluations. Effect of hyperparameters on analogy evaluation tasks. Correlation of human judgment with word vector distances. Dealing with ambiguity in word using contexts. Window classification. This set of notes first introduces the GloVe model for training word vectors. Then it extends our discussion of word vectors (interchangeably called word embeddings) by seeing how they can be evaluated intrinsically and extrinsically. As we proceed, we discuss the example of word analogies as an intrinsic evaluation technique and how it can be used to tune word embedding techniques. We then discuss training model weights/parameters and word vectors for extrinsic tasks. Lastly we motivate artificial neural networks as a class of models for natural language processing tasks.",
"title": ""
},
{
"docid": "f981f9a15062f4187dfa7ac71f19d54a",
"text": "Background\nSoccer is one of the most widely played sports in the world. However, soccer players have an increased risk of lower limb injury. These injuries may be caused by both modifiable and non-modifiable factors, justifying the adoption of an injury prevention program such as the Fédération Internationale de Football Association (FIFA) 11+. The purpose of this study was to evaluate the efficacy of the FIFA 11+ injury prevention program for soccer players.\n\n\nMethodology\nThis meta-analysis was based on the PRISMA 2015 protocol. A search using the keywords \"FIFA,\" \"injury prevention,\" and \"football\" found 183 articles in the PubMed, MEDLINE, LILACS, SciELO, and ScienceDirect databases. Of these, 6 studies were selected, all of which were randomized clinical trials.\n\n\nResults\nThe sample consisted of 6,344 players, comprising 3,307 (52%) in the intervention group and 3,037 (48%) in the control group. The FIFA 11+ program reduced injuries in soccer players by 30%, with an estimated relative risk of 0.70 (95% confidence interval, 0.52-0.93, p = 0.01). In the intervention group, 779 (24%) players had injuries, while in the control group, 1,219 (40%) players had injuries. However, this pattern was not homogeneous throughout the studies because of clinical and methodological differences in the samples. This study showed no publication bias.\n\n\nConclusion\nThe FIFA 11+ warm-up program reduced the risk of injury in soccer players by 30%.",
"title": ""
}
] |
scidocsrr
|
ba1654541205060a856c737a6566d740
|
Generating Digital Twin Models using Knowledge Graphs for Industrial Production Lines
|
[
{
"docid": "18ec689bc3dcbb076beabaff3bdc43de",
"text": "Much attention has recently been given to the creation of large knowledge bases that contain millions of facts about people, things, and places in the world. These knowledge bases have proven to be incredibly useful for enriching search results, answering factoid questions, and training semantic parsers and relation extractors. The way the knowledge base is actually used in these systems, however, is somewhat shallow—they are treated most often as simple lookup tables, a place to find a factoid answer given a structured query, or to determine whether a sentence should be a positive or negative training example for a relation extraction model. Very little is done in the way of reasoning with these knowledge bases or using them to improve machine reading. This is because typical probabilistic reasoning systems do not scale well to collections of facts as large as modern knowledge bases, and because it is difficult to incorporate information from a knowledge base into typical natural language processing models. In this thesis we present methods for reasoning over very large knowledge bases, and we show how to apply these methods to models of machine reading. The approaches we present view the knowledge base as a graph and extract characteristics of that graph to construct a feature matrix for use in machine learning models. The graph characteristics that we extract correspond to Horn clauses and other logic statements over knowledge base predicates and entities, and thus our methods have strong ties to prior work on logical inference. We show through experiments in knowledge base completion, relation extraction, and question answering that our methods can successfully incorporate knowledge base information into machine learning models of natural language.",
"title": ""
}
] |
[
{
"docid": "4bce532be92d68a39dd07b6f3e799721",
"text": "Most so-called “errors” in probabilistic reasoning are in fact not violations of probability theory. Examples of such “errors” include overconfi dence bias, conjunction fallacy, and base-rate neglect. Researchers have relied on a very narrow normative view, and have ignored conceptual distinctions—for example, single case versus relative frequency—fundamental to probability theory. By recognizing and using these distinctions, however, we can make apparently stable “errors” disappear, reappear, or even invert. I suggest what a reformed understanding of judgments under uncertainty might look like.",
"title": ""
},
{
"docid": "d2bf33fcd8d1de5cca697ef97e774feb",
"text": "The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance on many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts preferred by our new metric were preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants' subjective scores on the usability of a caption, as compared to the correlation between WER metric and participant subjective scores. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.",
"title": ""
},
{
"docid": "742dbd75ad995d5c51c4cbce0cc7f8cc",
"text": "Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used twofingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.",
"title": ""
},
{
"docid": "66b104459bdfc063cf7559c363c5802f",
"text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.",
"title": ""
},
{
"docid": "84a9af22a0fa5a755b750ddf914360f9",
"text": "Pancreatic cancer has one of the worst survival rates amongst all forms of cancer because its symptoms manifest later into the progression of the disease. One of those symptoms is jaundice, the yellow discoloration of the skin and sclera due to the buildup of bilirubin in the blood. Jaundice is only recognizable to the naked eye in severe stages, but a ubiquitous test using computer vision and machine learning can detect milder forms of jaundice. We propose BiliScreen, a smartphone app that captures pictures of the eye and produces an estimate of a person's bilirubin level, even at levels normally undetectable by the human eye. We test two low-cost accessories that reduce the effects of external lighting: (1) a 3D-printed box that controls the eyes' exposure to light and (2) paper glasses with colored squares for calibration. In a 70-person clinical study, we found that BiliScreen with the box achieves a Pearson correlation coefficient of 0.89 and a mean error of -0.09 ± 2.76 mg/dl in predicting a person's bilirubin level. As a screening tool, BiliScreen identifies cases of concern with a sensitivity of 89.7% and a specificity of 96.8% with the box accessory.",
"title": ""
},
{
"docid": "be1fdb17f240295bd614ce6053acfe8b",
"text": "OBJECTIVES\nAesthetic improvement and psychological enhancement have been cited as justifications for orthodontic treatment. This paper reviews the evidence that relates malocclusion to psychological health and quality of life and explores whether this evidence supports the most commonly used aesthetic Orthodontic Treatment Need Indices (OTNI).\n\n\nMATERIALS AND METHODS\nThe relevant cited material from the MEDLINE, Web of Science, Scopus, Cochrane databases, and scientific textbooks were used. The citation rate was confirmed by using the Google Scholar.\n\n\nRESULTS\nThe subjective nature of aesthetic indices and the variable perception of attractiveness between clinicians and patients, and among various cultures or countries are a few limitations of aesthetic OTNI. The available evidence of mainly cross-sectional studies on the link between malocclusion and either psychosocial well-being or quality of life is not conclusive, and sometimes contradictory, to suggest these characteristics are affected by malocclusions. Further, the long-term longitudinal studies did not suggest that people with malocclusion are disadvantaged psychologically, or their quality of life would be worse off, which challenges using aesthetic OTNI to assess the social and psychological implications of malocclusion.\n\n\nCONCLUSION\nThe subjective nature of aesthetic OTNI and the minor contributory role of malocclusion in psychosocial health or quality of life undermine using aesthetic indices to assess the likely social and psychological implications of malocclusion. Consequently, using aesthetic OTNI, as a method to quantify malocclusion remains open to debate. Various soft and hard-tissue analyses are used before formulating a treatment plan (i.e., assessment of sagittal and vertical skeletal relationships). The addition of a shortened version of these analyses to the aesthetic OTNI can be a good substitute for the aesthetic components of OTNI, if an assessment of the aesthetic aspects of malocclusion is intended. This reduces subjectivity and improves the validity of the OTNI that incorporate an aesthetic component.",
"title": ""
},
{
"docid": "8a634e7bf127f2a90227c7502df58af0",
"text": "A convex channel surface with Si0.8Ge0.2 is proposed to enhance the retention time of a capacitorless DRAM Generation 2 type of capacitorless DRAM cell. This structure provides a physical well together with an electrostatic barrier to more effectively store holes and thereby achieve larger sensing margin as well as retention time. The advantages of this new cell design as compared with the planar cell design are assessed via twodimensional device simulations. The results indicate that the convex heterojunction channel design is very promising for future capacitorless DRAM. Keywords-Capacitorless DRAM; Retention Time; Convex Channel; Silicon Germanium;",
"title": ""
},
{
"docid": "a94f294b09dfb190433c0f80d08de67f",
"text": "The in vitro antibacterial activities of 29 traditional medicinal plants used in respiratory ailments were assessed on multidrug resistant Gram-positive and Gram-negative bacteria isolated from the sore throat patients and two reference strains. The methanolic, n-hexane, and aqueous extracts were screened by the agar well diffusion assay. Bioactive fractions of effective extracts were identified on TLC coupled with bioautography, while their toxicity was determined using haemolytic assay against human erythrocytes. Qualitative and quantitative phytochemical analysis of effective extracts was also performed. Methanolic extract of 18 plants showed antimicrobial activity against test strains. Adhatoda vasica (ZI = 17-21 mm, MIC: 7.12-62.5 μg/mL), Althaea officinalis (ZI = 16-20 mm, MIC: 15.62-31.25 μg/mL), Cordia latifolia (ZI = 16-20 mm, MIC: 12.62-62.5 μg/mL), Origanum vulgare (ZI = 20-22 mm, MIC: 3-15.62 μg/mL), Thymus vulgaris (ZI = 21-25 mm, MIC: 7.81-31.25 μg/mL), and Ziziphus jujuba (ZI = 14-20 mm, MIC: 7.81-31.25 μg/mL) showed significant antibacterial activity. Alkaloid fractions of Adhatoda vasica, Cordia latifolia, and Origanum vulgare and flavonoid fraction of the Althaea officinalis, Origanum vulgare, Thymus Vulgaris, and Ziziphus jujuba exhibited antimicrobial activity. Effective plant extracts show 0.93-0.7% erythrocyte haemolysis. The results obtained from this study provide a scientific rationale for the traditional use of these herbs and laid the basis for future studies to explore novel antimicrobial compounds.",
"title": ""
},
{
"docid": "eae04aa2942bfd3752fb596f645e2c2e",
"text": "PURPOSE\nHigh fasting blood glucose (FBG) can lead to chronic diseases such as diabetes mellitus, cardiovascular and kidney diseases. Consuming probiotics or synbiotics may improve FBG. A systematic review and meta-analysis of controlled trials was conducted to clarify the effect of probiotic and synbiotic consumption on FBG levels.\n\n\nMETHODS\nPubMed, Scopus, Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature databases were searched for relevant studies based on eligibility criteria. Randomized or non-randomized controlled trials which investigated the efficacy of probiotics or synbiotics on the FBG of adults were included. Studies were excluded if they were review articles and study protocols, or if the supplement dosage was not clearly mentioned.\n\n\nRESULTS\nA total of fourteen studies (eighteen trials) were included in the analysis. Random-effects meta-analyses were conducted for the mean difference in FBG. Overall reduction in FBG observed from consumption of probiotics and synbiotics was borderline statistically significant (-0.18 mmol/L 95 % CI -0.37, 0.00; p = 0.05). Neither probiotic nor synbiotic subgroup analysis revealed a significant reduction in FBG. The result of subgroup analysis for baseline FBG level ≥7 mmol/L showed a reduction in FBG of 0.68 mmol/L (-1.07, -0.29; ρ < 0.01), while trials with multiple species of probiotics showed a more pronounced reduction of 0.31 mmol/L (-0.58, -0.03; ρ = 0.03) compared to single species trials.\n\n\nCONCLUSION\nThis meta-analysis suggests that probiotic and synbiotic supplementation may be beneficial in lowering FBG in adults with high baseline FBG (≥7 mmol/L) and that multispecies probiotics may have more impact on FBG than single species.",
"title": ""
},
{
"docid": "3fe9e38a41d422367da1fce31579eef2",
"text": "While desktop virtual reality (VR) offers a way to visualize structure in large information sets, there have been relatively few empirical investigations of visualization designs in this domain. This thesis reports the development and testing of a series of prototype desktop vR worlds, which were designed to support navigation during information visudization and retrievd. Four rnethods were used for data collection: search task scoring, subjective questionnaires, navigationai activity logging ruid analysis, and administration of tests for spatid and structure-learning ability. The combination of these research methods revealed significant effects of user abilities, information environment designs, and task learning. The first of four studies compared three versions of a stmctured virtuai landscape, finding significant differences in sense of presence, ease of use, and overall enjoyment; there was, however, no significant difference in performance among the three landscape versions. The second study found a hypertext interface to be superior to a VR interface for task performance, ease of use, and rated efficiency; nevertheless, the VR interface was rated as more enjoyable. The third study used a new layout aigorithrn; the resulting prototype was rated as easier to use and more efficient than the previous VR version. In the fourth study, a zoomable, rnap-like view of the newest VR prototype was developed. Experimental participants found the map-view superior to the 3D-view for task performance and rated efficiency. Overall, this research did not find a performance advantage for using 3D versions of VR. In addition, the results of the fourth study found that people in the lowest quartile of spatial ability had significantly lower search performance (relative to the highest three quartiles) in a VR world. This finding suggests that individual differences for traits such as spatial ability may be important in detennining the usability and acceptability of VR environments. In addition to the experimental results summarized above, this thesis dso developed and refined a methodology for investigating tasks, users, and software in 3D environments. This methodology included tests for spatial and structure-learning abilities, as well as logging and analysis of a user's navigational activi-",
"title": ""
},
{
"docid": "a13ff1e2192c9a7e4bcfdf5e1ac39538",
"text": "Before graduating from X as Waymo, Google's self-driving car project had been using custom lidars for several years. In their latest revision, the lidars are designed to meet the challenging requirements we discovered in autonomously driving 2 million highly-telemetered miles on public roads. Our goal is to approach price points required for advanced driver assistance systems (ADAS) while meeting the performance needed for safe self-driving. This talk will review some history of the project and describe a few use-cases for lidars on Waymo cars. Out of that will emerge key differences between lidars for self-driving and traditional applications (e.g. mapping) which may provide opportunities for semiconductor lasers.",
"title": ""
},
{
"docid": "a93405a92bb75e459ffb102c1d394d09",
"text": "OBJECTIVE\nTo compare the stability of lengthened sacro-iliac screw and sacro-iliac screw for the treatment of bilateral vertical sacral fractures to provide reference for clinical application.\n\n\nMETHODS\nA finite element model of Tile C pelvic ring injury (bilateral type Denis II fracture of sacrum) was produced. (Tile and Denis are surgeons, who put forward the classifications of pelvic ring injury and sacral fracture respectively.) The bilateral sacral fractures were fixed with a lengthened sacro-iliac screw and a sacro-iliac screw in seven types of models, respectively. The translation and angular displacement of the superior surface of the sacrum in the case of standing on both feet were measured and compared.\n\n\nRESULTS\nThe stability of one lengthened sacro-iliac screw fixation in the S1 or S2 segment is superior to that of two bidirectional sacro-iliac screws in the same sacral segment; the stability of one lengthened sacro-iliac screw fixation in S1 and S2 segments, respectively, is superior to that of two bidirectional sacro-iliac screw fixation in S1 and S2 segments, respectively; the stability of one lengthened sacro-iliac screw fixation in S1 and S2 segments, respectively, is superior to that of one lengthened sacro-iliac screw fixation in the S1 or S2 segment; the stability of two bidirectional sacro-iliac screw fixation in S1 and S2 segments, respectively, is markedly superior to that of two bidirectional sacro-iliac screw fixation in the S1 or S2 segment and is also markedly superior to that of one sacro-iliac screw fixation in the S1 segment and one sacro-iliac screw fixation in the S2 segment; the vertical stability of the lengthened sacro-iliac screw or the sacro-iliac screw fixation in S2 is superior to that of S1. The rotational stability of the lengthened sacro-iliac screw or sacro-iliac screw fixation in S1 is superior to that of S2.\n\n\nCONCLUSION\nS1 and S2 lengthened sacro-iliac screws should be used for the fixation in bilateral sacral fractures of Tile C pelvic ring injury as far as possible and the most stable fixation is the combination of the lengthened sacro-iliac screws of S1 and S2 segments. Even if lengthened sacro-iliac screws cannot be used due to limited conditions, two bidirectional sacro-iliac screw fixation in S1 and S2 segments, respectively, is recommended. No matter which kind of sacro-iliac screw is applied, the fixation combination of S1 and S2 segments is strongly recommended to maximise the stability of the pelvic posterior ring.",
"title": ""
},
{
"docid": "ff49d7f47a957b69ad6fc28dd567590e",
"text": "Although graphical user interfaces (GUIs) constitute a large part of the software being developed today and are typically created using rapid prototyping, there are no effective regression testing techniques for GUIs. The needs of GUI regression testing differ from those of traditional software. When the structure of a GUI is modified, test cases from the original GUI's suite are either reusable or unusable on the modified GUI. Because GUI test case generation is expensive, our goal is to make the unusable test cases usable, thereby helping to retain the suite's event coverage. The idea of reusing these unusable (obsolete) test cases has not been explored before. This article shows that a large number of test cases become unusable for GUIs. It presents a new GUI regression testing technique that first automatically determines the usable and unusable test cases from a test suite after a GUI modification, then determines the unusable test cases that can be repaired so that they can execute on the modified GUI, and finally uses repairing transformations to repair the test cases. This regression testing technique along with four repairing transformations has been implemented. An empirical study for four open-source applications demonstrates that (1) this approach is effective in that many of the test cases can be repaired, and is practical in terms of its time performance, (2) certain types of test cases are more prone to becoming unusable, and (3) certain types of “dominator” events, when modified, make a large number of test cases unusable.",
"title": ""
},
{
"docid": "554a0628270978757eda989c67ac3416",
"text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.",
"title": ""
},
{
"docid": "43cdcbfaca6c69cdb8652761f7e8b140",
"text": "Aggregation of local features is a well-studied approach for image as well as 3D model retrieval (3DMR). A carefully designed local 3D geometric feature is able to describe detailed local geometry of 3D model, often with invariance to geometric transformations that include 3D rotation of local 3D regions. For efficient 3DMR, these local features are aggregated into a feature per 3D model. A recent alternative, end-toend 3D Deep Convolutional Neural Network (3D-DCNN) [7][33], has achieved accuracy superior to the abovementioned aggregation-of-local-features approach. However, current 3D-DCNN based methods have weaknesses; they lack invariance against 3D rotation, and they often miss detailed geometrical features due to their quantization of shapes into coarse voxels in applying 3D-DCNN. In this paper, we propose a novel deep neural network for 3DMR called Deep Local feature Aggregation Network (DLAN) that combines extraction of rotation-invariant 3D local features and their aggregation in a single deep architecture. The DLAN describes local 3D regions of a 3D model by using a set of 3D geometric features invariant to local rotation. The DLAN then aggregates the set of features into a (global) rotation-invariant and compact feature per 3D model. Experimental evaluation shows that the DLAN outperforms the existing deep learning-based 3DMR algorithms.",
"title": ""
},
{
"docid": "de1d3377aafd684385a332a03d4b6267",
"text": "It has recently been suggested that brain areas crucial for mentalizing, including the medial prefrontal cortex (mPFC), are not activated exclusively during mentalizing about the intentions, beliefs, morals or traits of the self or others, but also more generally during cognitive reasoning including relational processing about objects. Contrary to this notion, a meta-analysis of cognitive reasoning tasks demonstrates that the core mentalizing areas are not systematically recruited during reasoning, but mostly when these tasks describe some human agency or general evaluative and enduring traits about humans, and much less so when these social evaluations are absent. There is a gradient showing less mPFC activation as less mentalizing content is contained in the stimulus material used in reasoning tasks. Hence, it is more likely that cognitive reasoning activates the mPFC because inferences about social agency and mind are involved.",
"title": ""
},
{
"docid": "5de6c98e57b19960e9d2ef4f952cf78d",
"text": "We present chaining techniques for signing/verifying multiple packets using a single signing/verification operation. We then present flow signing and verification procedures based upon a tree chaining technique. Since a single signing/verification operation is amortized over many packets, these procedures improve signing and verification rates by one to two orders of magnitude compared to the approach of signing/verifying packets individually. Our procedures do not depend upon reliable delivery of packets, provide delay-bounded signing, and are thus suitable for delay-sensitive flows and multicast applications. To further improve our procedures, we propose several extensions to the Feige-Fiat-Shamir digital signature scheme to speed up both the signing and verification operations, as well as to allow “adjustable and incremental” verification. The extended scheme, called eFFS, is compared to four other digital signature schemes (RSA, DSA, ElGamal, Rabin). We compare their signing and verification times, as well as key and signature sizes. We observe that (i) the signing and verification operations of eFFS are highly efficient compared to the other schemes, (ii) eFFS allows a tradeoff between memory and signing/verification time, and (iii) eFFS allows adjustable and incremental verification by receivers.",
"title": ""
},
{
"docid": "73f6ba4ad9559cd3c6f7a88223e4b556",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
},
{
"docid": "2809e4b07123e5d594481e423c001821",
"text": "In the current driving environment, the top priority is the safety of person. There are two methods proposed to solve safety problems. One is active sensors method and another is passive sensor method. Though with high accuracy, active sensors method has many disadvantages such as high cost, failure to adapt to complex change of environments, and problems relating to laws. Thus there is no way to popularize it. In contrast, passive sensor method is more suitable to current assist systems in virtue of low cost, ability to acquire lots of information. In this paper, the passive sensor method is applied to front and rear vision-based collision warning application. Meanwhile, time-to-contact is used to collision judgment analysis and dedicated short range communications is used to give alert information to near vehicle.",
"title": ""
}
] |
scidocsrr
|
55ff13f2e08f1a4027633feff317b156
|
Deep Fusion of Multiple Semantic Cues for Complex Event Recognition
|
[
{
"docid": "a25338ae0035e8a90d6523ee5ef667f7",
"text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.",
"title": ""
}
] |
[
{
"docid": "0acfa73c168328e33a92be4cc9de9c61",
"text": "This article reviews recent advances in applying natural language processing NLP to Electronic Health Records EHRs for computational phenotyping. NLP-based computational phenotyping has numerous applications including diagnosis categorization, novel phenotype discovery, clinical trial screening, pharmacogenomics, drug-drug interaction DDI, and adverse drug event ADE detection, as well as genome-wide and phenome-wide association studies. Significant progress has been made in algorithm development and resource construction for computational phenotyping. Among the surveyed methods, well-designed keyword search and rule-based systems often achieve good performance. However, the construction of keyword and rule lists requires significant manual effort, which is difficult to scale. Supervised machine learning models have been favored because they are capable of acquiring both classification patterns and structures from data. Recently, deep learning and unsupervised learning have received growing attention, with the former favored for its performance and the latter for its ability to find novel phenotypes. Integrating heterogeneous data sources have become increasingly important and have shown promise in improving model performance. Often, better performance is achieved by combining multiple modalities of information. Despite these many advances, challenges and opportunities remain for NLP-based computational phenotyping, including better model interpretability and generalizability, and proper characterization of feature relations in clinical narratives.",
"title": ""
},
{
"docid": "5cccc7cc748d3461dc3c0fb42a09245f",
"text": "The self and attachment difficulties associated with chronic childhood abuse and other forms of pervasive trauma must be understood and addressed in the context of the therapeutic relationship for healing to extend beyond resolution of traditional psychiatric symptoms and skill deficits. The authors integrate contemporary research and theory about attachment and complex developmental trauma, including dissociation, and apply it to psychotherapy of complex trauma, especially as this research and theory inform the therapeutic relationship. Relevant literature on complex trauma and attachment is integrated with contemporary trauma theory as the background for discussing relational issues that commonly arise in this treatment, highlighting common challenges such as forming a therapeutic alliance, managing frame and boundaries, and working with dissociation and reenactments.",
"title": ""
},
{
"docid": "969e21385b897ec7b0f8fda0566db3bc",
"text": "In many real-life recommendation settings, user profiles and past activities are not available. The recommender system should make predictions based on session data, e.g. session clicks and descriptions of clicked items. Conventional recommendation approaches, which rely on past user-item interaction data, cannot deliver accurate results in these situations. In this paper, we describe a method that combines session clicks and content features such as item descriptions and item categories to generate recommendations. To model these data, which are usually of different types and nature, we use 3-dimensional convolutional neural networks with character-level encoding of all input data. While 3D architectures provide a natural way to capture spatio-temporal patterns, character-level networks allow modeling different data types using their raw textual representation, thus reducing feature engineering effort. We applied the proposed method to predict add-to-cart events in e-commerce websites, which is more difficult then predicting next clicks. On two real datasets, our method outperformed several baselines and a state-of-the-art method based on recurrent neural networks.",
"title": ""
},
{
"docid": "4aca364133eb0630c3b97e69922d07b7",
"text": "Deep learning offers new tools to improve our understanding of many important scientific problems. Neutrinos are the most abundant particles in existence and are hypothesized to explain the matter-antimatter asymmetry that dominates our universe. Definitive tests of this conjecture require a detailed understanding of neutrino interactions with a variety of nuclei. Many measurements of interest depend on vertex reconstruction — finding the origin of a neutrino interaction using data from the detector, which can be represented as images. Traditionally, this has been accomplished by utilizing methods that identify the tracks coming from the interaction. However, these methods are not ideal for interactions where an abundance of tracks and cascades occlude the vertex region. Manual algorithm engineering to handle these challenges is complicated and error prone. Deep learning extracts rich, semantic features directly from raw data, making it a promising solution to this problem. In this work, deep learning models are presented that classify the vertex location in regions meaningful to the domain scientists improving their ability to explore more complex interactions.",
"title": ""
},
{
"docid": "abff55f0189ac9aff9db78212c88abf0",
"text": "The climatic modifications lead to global warming; favouring the risk of the appearance and development of diseases are considered until now tropical diseases. Another important factor is the workers' immigration, the economic crisis favouring the passive transmission of new species of culicidae from different areas. Malaria is the disease with the widest distribution in the globe. Millions of people are infected every year in Africa, India, South-East Asia, Middle East, and Central and South America, with more than 41% of the global population under the risk of infestation with malaria. The increase of the number of local cases reported in 2007-2011 indicates that the conditions can favour the high local transmission in the affected areas. In the situation presented, the establishment of the level of risk concerning the reemergence of malaria in Romania becomes a priority.",
"title": ""
},
{
"docid": "7662a9d5d31ed2307837a04ec7a4e27c",
"text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient onboard processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"title": ""
},
{
"docid": "4b09c9f8ddbdfb3fa316ba53d4cdee49",
"text": "We present a Markov chain model to estimate the saturation throughput of an IEEE 802.11 network in the basic access mode using cognitive radio. IEEE 802.11 networks are very popular and likely more spectrum is needed for their future deployment and applications. On the other hand the cognitive radio idea has been proposed to deal with the spectrum shortage problem. It is thus reasonable to assume that future IEEE 802.11 network may operate in the context of cognitive radio. One paradigm is that IEEE 802.11 networks will act as secondary users sharing the spectrum of a primary use using the dynamic spectrum access approach. In this scenario it will be important to study how the physical layer of cognitive radio may impact on the IEEE 802.11 network as a whole. Based on a Markov chain model, we investigate the impact of the primary user’s arrivals on the IEEE 802.11 network’s saturation throughput. We validate our presented model with ns2 simulations. As shown by our numerical results, our model results are very close to the ns2 simulation results.",
"title": ""
},
{
"docid": "6104736f53363991d675c2a03ada8c82",
"text": "The term machine learning refers to a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction, based on models derived from existing data. Two facets of mechanization should be acknowledged when considering machine learning in broad terms. Firstly, it is intended that the classification and prediction tasks can be accomplished by a suitably programmed computing machine. That is, the product of machine learning is a classifier that can be feasibly used on available hardware. Secondly, it is intended that the creation of the classifier should itself be highly mechanized, and should not involve too much human input. This second facet is inevitably vague, but the basic objective is that the use of automatic algorithm construction methods can minimize the possibility that human biases could affect the selection and performance of the algorithm. Both the creation of the algorithm and its operation to classify objects or predict events are to be based on concrete, observable data. The history of relations between biology and the field of machine learning is long and complex. An early technique [1] for machine learning called the perceptron constituted an attempt to model actual neuronal behavior, and the field of artificial neural network (ANN) design emerged from this attempt. Early work on the analysis of translation initiation sequences [2] employed the perceptron to define criteria for start sites in Escherichia coli. Further artificial neural network architectures such as the adaptive resonance theory (ART) [3] and neocognitron [4] were inspired from the organization of the visual nervous system. In the intervening years, the flexibility of machine learning techniques has grown along with mathematical frameworks for measuring their reliability, and it is natural to hope that machine learning methods will improve the efficiency of discovery and understanding in the mounting volume and complexity of biological data. This tutorial is structured in four main components. Firstly, a brief section reviews definitions and mathematical prerequisites. Secondly, the field of supervised learning is described. Thirdly, methods of unsupervised learning are reviewed. Finally, a section reviews methods and examples as implemented in the open source data analysis and visualization language R (http://www.r-project.org).",
"title": ""
},
{
"docid": "847a64b0b5f2b8f3387c260bca8bb9c0",
"text": "Pain-related emotions are a major barrier to effective self rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions. First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. Second, a fully labelled multimodal dataset (named `EmoPain') containing high resolution multiple-view face videos, head mounted and room audio signals, full body 3D motion capture and electromyographic signals from back muscles is supplied. Natural unconstrained pain related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises. Both instructed and non-instructed exercises were considered to reflect traditional scenarios of physiotherapist directed therapy and home-based self-directed therapy. Two sets of labels were assigned: level of pain from facial expressions annotated by eight raters and the occurrence of six pain-related body behaviours segmented by four experts. Third, through exploratory experiments grounded in the data, the factors and challenges in the automated recognition of such expressions and behaviour are described, the paper concludes by discussing potential avenues in the context of these findings also highlighting differences for the two exercise scenarios addressed.",
"title": ""
},
{
"docid": "d87730770e080ee926a4859e421d4309",
"text": "The term metastasis is widely used to describe the endpoint of the process by which tumour cells spread from the primary location to an anatomically distant site. Achieving successful dissemination is dependent not only on the molecular alterations of the cancer cells themselves, but also on the microenvironment through which they encounter. Here, we reviewed the molecular alterations of metastatic gastric cancer (GC) as it reflects a large proportion of GC patients currently seen in clinic. We hope that further exploration and understanding of the multistep metastatic cascade will yield novel therapeutic targets that will lead to better patient outcomes.",
"title": ""
},
{
"docid": "fda1df969d6d51c5937f016d661911bf",
"text": "In this paper the solution of two-stage guillotine cutting stock problems is considered. Especially such problems are under investigation where the sizes of the order demands diier in a large range. We propose a new approach dealing with such situations and compare it with the classical Gilmore/Gomory approach. We report results of extensive numerical experiments which show the advantages of the new approach.",
"title": ""
},
{
"docid": "dcda412c18e92650d9791023f13e4392",
"text": "Graph can straightforwardly represent the relations between the objects, which inevitably draws a lot of attention of both academia and industry. Achievements mainly concentrate on homogeneous graph and bipartite graph. However, it is difficult to use existing algorithm in actual scenarios. Because in the real world, the type of the objects and the relations are diverse and the amount of the data can be very huge. Considering of the characteristics of \"black market\", we proposeHGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs.We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate scores of it, which fuse the structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistic or other anomaly detection algorithms in scores space. We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.",
"title": ""
},
{
"docid": "45d6863e54b343d7a081e79c84b81e65",
"text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim",
"title": ""
},
{
"docid": "185ae8a2c89584385a810071c6003c15",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "5e8d73e199782d0fe9608483a7f9eafa",
"text": "In the short time since publication of Boykov and Jolly's seminal paper [2001], graph cuts have become well established as a leading method in 2D and 3D semi-automated image segmentation. Although this approach is computationally feasible for many tasks, the memory overhead and supralinear time complexity of leading algorithms results in an excessive computational burden for high-resolution data. In this paper, we introduce a multilevel banded heuristic for computation of graph cuts that is motivated by the well-known narrow band algorithm in level set computation. We perform a number of numerical experiments to show that this heuristic drastically reduces both the running time and the memory consumption of graph cuts while producing nearly the same segmentation result as the conventional graph cuts. Additionally, we are able to characterize the type of segmentation target for which our multilevel banded heuristic yields different results from the conventional graph cuts. The proposed method has been applied to both 2D and 3D images with promising results.",
"title": ""
},
{
"docid": "e1edaf3e8754e8403b9be29f58ba3550",
"text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "60da71841669948e0a57ba4673693791",
"text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.",
"title": ""
},
{
"docid": "6c97853046dd2673d9c83990119ef43c",
"text": "Atomic actions (or transactions) are useful for coping with concurrency and failures. One way of ensuring atomicity of actions is to implement applications in terms of atomic data types: abstract data types whose objects ensure serializability and recoverability of actions using them. Many atomic types can be implemented to provide high levels of concurrency by taking advantage of algebraic properties of the type's operations, for example, that certain operations commute. In this paper we analyze the level of concurrency permitted by an atomic type. We introduce several local constraints on individual objects that suffice to ensure global atomicity of actions; we call these constraints local atomicity properties. We present three local atomicity properties, each of which is optimal: no strictly weaker local constraint on objects suffices to ensure global atomicity for actions. Thus, the local atomicity properties define precise limits on the amount of concurrency that can be permitted by an atomic type.",
"title": ""
},
{
"docid": "88ac62e6d0b804bca9f035d39a3cb5f5",
"text": "For measuring machines and machine tools, geometrical accuracy is a key performance criterion. While numerical compensation is well established for CMMs, it is increasingly used on machine tools in addition to mechanical accuracy. This paper is an update on the CIRP keynote paper by Sartori and Zhang from 1995 [Sartori S, Zhang GX (1995) Geometric error measurement and compensation of machines, Annals of the CIRP 44(2):599–609]. Since then, numerical error compensation has gained immense importance for precision machining. This paper reviews the fundamentals of numerical error compensation and the available methods for measuring the geometrical errors of a machine. It discusses the uncertainties involved in different mapping methods and their application characteristics. Furthermore, the challenges for the use of numerical compensation for manufacturing machines are specified. Based on technology and market development, this work aims at giving a perspective for the role of numerical compensation in the future. 2008 CIRP.",
"title": ""
}
] |
scidocsrr
|
fa3d75fe64ef0c260821ff265e6b24d1
|
Acquiring temporal constraints between relations
|
[
{
"docid": "444ce710b4c6a161ae5f801ed0ae8bec",
"text": "This paper investigates a machine learning approach for temporally ordering and anchoring events in natural language texts. To address data sparseness, we used temporal reasoning as an oversampling method to dramatically expand the amount of training data, resulting in predictive accuracy on link labeling as high as 93% using a Maximum Entropy classifier on human annotated data. This method compared favorably against a series of increasingly sophisticated baselines involving expansion of rules derived from human intuitions.",
"title": ""
}
] |
[
{
"docid": "1ee1adcfd73e9685eab4e2abd28183c7",
"text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.",
"title": ""
},
{
"docid": "9ee98f4c2e1fe8b5f49fd0e8a3b142c5",
"text": "In this paper we characterize the workload of a Netflix streaming video web server. Netflix is a widely popular subscription service with over 81 million global subscribers [24]. The service streams professionally produced TV shows and movies over the Internet to an extremely diverse and representative set of playback devices over broadband, DSL, WiFi and cellular connections. Characterizing this type of workload is an important step to understanding and optimizing the performance of the servers used to support the growing number of streaming video services. We focus on the HTTP requests observed at the server from Netflix client devices by analyzing anonymized log files obtained from a server containing a portion of the Netflix catalog. We introduce the notion of chains of sequential requests to represent the spatial locality of the workload and find that despite servicing clients that adapt to changes in network and server conditions, and despite the fact that the majority of chains are short (60% are no longer than 1 MB), the vast majority of the bytes requested are sequential. We also observe that during a viewing session, client devices behave in recognizable patterns. We characterize sessions using transient, stable and inactive phases. We find that playback sessions are surprisingly stable; across all sessions 5% of the total session time is spent in transient phases, 79% in stable phases and 16% in inactive phases, and the average duration of a stable phase is 8.5 minutes. Finally we analyze the chains to evaluate different prefetch algorithms and show that by exploiting knowledge about workload characteristics, the workload can be serviced with 13% lower hard drive utilization or 30% less system memory compared to a prefetch algorithm that makes no use of workload characteristics.",
"title": ""
},
{
"docid": "9328c119a7622b742749d357f58c7617",
"text": "An algorithm is described for recovering the six degrees of freedom of motion of a vehicle from a sequence of range images of a static environment taken by a range camera rigidly attached to the vehicle. The technique utilizes a least-squares minimization of the difference between the measured rate of change of elevation at a point and the rate predicted by the so-called elevation rate constmint equation. It is assumed that most of the surface is smooth enough so that local tangent planes can be constructed, and that the motion between frames is smaller than the size of most features in the range image. This method does not depend on the determination of correspondences between isolated high-level features in the range images. The algorithm has been successfully applied to data obtained from the range imager on the Autonomous Land Vehicle (ALV). Other sensors on the ALV provide an initial approximation to the motion between frames. It was found that the outputs of the vehicle sensors themselves are not suitable for accurate motion recovery because of errors in dead reckoning resulting from such problems as wheel slippage. The sensor measurements are used only to approximately register range data. The algorithm described here then recovers the difference between the true motion and that estimated from the sensor outputs. s 1991",
"title": ""
},
{
"docid": "4bdfc78c5a6b960a012d8d87c9bc182e",
"text": "The purpose of our article is to evaluate wood as a construction material in terms of the energy required for its construction and operation, compared to other types of construction materials. First, the role of construction and material manufacturing is evaluated within the full life cycle energy and CO2 emissions of a building, concluding that the issue of embodied energy justifi es the use of less energy intensive materials. Then the article reviews the literature dealing with the energy requirements of wood based construction, in order to establish whether the use of this natural, low density construction material is more energy effi cient than using brick, reinforced concrete and steel structures. According to our analysis, the vast majority of the studies found that the embodied energy is signifi cantly lower in wood based construction when compared to inorganic materials. According to several authors, wood construction could save much energy and signifi cantly reduce the emissions related to the building sector on the national level. Carbon sequestration, and the related mitigation of the global climate change effect, can be signifi cant if the share of durable wooden buildings can be increased in the market, using sustainably produced raw materials that are handled responsibly at the end of their lifetime. Some confl icting studies make important points concerning the heat storage, recycling and on-site labour demands related to these structures. These sources contribute to a deeper understanding of the issue, but do not alter the basic conclusions concerning the benefi ts of wood based construction. Some important aspects of wood extraction, manufacturing and construction that can help minimising the embodied energy of wood based structures are also discussed in the study.",
"title": ""
},
{
"docid": "17ec5256082713e85c819bb0a0dd3453",
"text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.",
"title": ""
},
{
"docid": "995fca88b7813c5cfed1c92522cc8d29",
"text": "Diode rectifiers with large dc-bus capacitors, used in the front ends of variable-frequency drives (VFDs) and other ac-to-dc converters, draw discontinuous current from the power system, resulting in current distortion and, hence, voltage distortion. Typically, the power system can handle current distortion without showing signs of voltage distortion. However, when the majority of the load on a distribution feeder is made up of VFDs, current distortion becomes an important issue since it can cause voltage distortion. Multipulse techniques to reduce input current harmonics are popular because they do not interfere with the existing power system either from higher conducted electromagnetic interference, when active techniques are used, or from possible resonance, when capacitor-based filters are employed. In this paper, a new 18-pulse topology is proposed that has two six-pulse rectifiers powered via a phase-shifting isolation transformer, while the third six-pulse rectifier is fed directly from the ac source via a matching inductor. This idea relies on harmonic current cancellation strategy rather than flux cancellation method and results in lower overall harmonics. It is also seen to be smaller in size and weight and lower in cost compared to an isolation transformer. Experimental results are given to validate the concept.",
"title": ""
},
{
"docid": "8ab51537f15c61f5b34a94461b9e0951",
"text": "An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time.",
"title": ""
},
{
"docid": "9db883fc2d35d52aed34806769685385",
"text": "In dynamic magnetic resonance imaging (MRI) studies, the motion kinetics or the contrast variability are often hard to predict, hampering an appropriate choice of the image update rate or the temporal resolution. A constant azimuthal profile spacing (111.246deg), based on the Golden Ratio, is investigated as optimal for image reconstruction from an arbitrary number of profiles in radial MRI. The profile order is evaluated and compared with a uniform profile distribution in terms of signal-to-noise ratio (SNR) and artifact level. The favorable characteristics of such a profile order are exemplified in two applications on healthy volunteers. First, an advanced sliding window reconstruction scheme is applied to dynamic cardiac imaging, with a reconstruction window that can be flexibly adjusted according to the extent of cardiac motion that is acceptable. Second, a contrast-enhancing k-space filter is presented that permits reconstructing an arbitrary number of images at arbitrary time points from one raw data set. The filter was utilized to depict the T1-relaxation in the brain after a single inversion prepulse. While a uniform profile distribution with a constant angle increment is optimal for a fixed and predetermined number of profiles, a profile distribution based on the Golden Ratio proved to be an appropriate solution for an arbitrary number of profiles",
"title": ""
},
{
"docid": "ac94c03a72607f76e53ae0143349fff3",
"text": "Abrlracr-A h u l a for the cppecity et arbitrary sbgle-wer chrurwla without feedback (mot neccgdueily Wium\" stable, stationary, etc.) is proved. Capacity ie shown to e i p l the supremum, over all input processts, & the input-outpat infiqjknda QBnd as the llnainl ia praabiutJr d the normalized information density. The key to thir zbllljt is a ntw a\"c sppmrh bosed 811 a Ampie II(A Lenar trwrd eu the pralwbility of m-4v hgpothesb t#tcl UIOlls eq*rdIaN <hypotheses. A neassruy and d c i e n t coadition Eor the validity of the strong comeme is given, as well as g\"l expressions for eeapacity.",
"title": ""
},
{
"docid": "9eccf674ee3b3826b010bc142ed24ef0",
"text": "We present an architecture of a recurrent neural network (RNN) with a fullyconnected deep neural network (DNN) as its feature extractor. The RNN is equipped with both causal temporal prediction and non-causal look-ahead, via auto-regression (AR) and moving-average (MA), respectively. The focus of this paper is a primal-dual training method that formulates the learning of the RNN as a formal optimization problem with an inequality constraint that provides a sufficient condition for the stability of the network dynamics. Experimental results demonstrate the effectiveness of this new method, which achieves 18.86% phone recognition error on the TIMIT benchmark for the core test set. The result approaches the best result of 17.7%, which was obtained by using RNN with long short-term memory (LSTM). The results also show that the proposed primal-dual training method produces lower recognition errors than the popular RNN methods developed earlier based on the carefully tuned threshold parameter that heuristically prevents the gradient from exploding.",
"title": ""
},
{
"docid": "4cfd4f09a88186cb7e5f200e340d1233",
"text": "Keyword spotting (KWS) aims to detect predefined keywords in continuous speech. Recently, direct deep learning approaches have been used for KWS and achieved great success. However, these approaches mostly assume fixed keyword vocabulary and require significant retraining efforts if new keywords are to be detected. For unrestricted vocabulary, HMM based keywordfiller framework is still the mainstream technique. In this paper, a novel deep learning approach is proposed for unrestricted vocabulary KWS based on Connectionist Temporal Classification (CTC) with Long Short-Term Memory (LSTM). Here, an LSTM is trained to discriminant phones with the CTC criterion. During KWS, an arbitrary keyword can be specified and it is represented by one or more phone sequences. Due to the property of peaky phone posteriors of CTC, the LSTM can produce a phone lattice. Then, a fast substring matching algorithm based on minimum edit distance is used to search the keyword phone sequence on the phone lattice. The approach is highly efficient and vocabulary independent. Experiments showed that the proposed approach can achieve significantly better results compared to a DNN-HMM based keyword-filler decoding system. In addition, the proposed approach is also more efficient than the DNN-HMM KWS baseline.",
"title": ""
},
{
"docid": "9074416729e07ba4ec11ebd0021b41ed",
"text": "The purpose of this study is to examine the relationships between internet addiction and depression, anxiety, and stress. Participants were 300 university students who were enrolled in mid-size state University, in Turkey. In this study, the Online Cognition Scale and the Depression Anxiety Stress Scale were used. In correlation analysis, internet addiction was found positively related to depression, anxiety, and stress. According to path analysis results, depression, anxiety, and stress were predicted positively by internet addiction. This research shows that internet addiction has a direct impact on depression, anxiety, and stress.",
"title": ""
},
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
},
{
"docid": "2215fd5b4f1e884a66b62675c8c92d33",
"text": "In the context of structural optimization we propose a new numerical method based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or nonlinear elasticity. We consider various objective functions with weight and perimeter constraints. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9834dd845dcb15c91b590881ce7b2f5e",
"text": "Plant reproduction occurs through the production of gametes by a haploid generation, the gametophyte. Flowering plants have highly reduced male and female gametophytes, called pollen grains and embryo sacs, respectively, consisting of only a few cells. Gametophytes are critical for sexual reproduction, but detailed understanding of their development remains poor as compared to the diploid sporophyte. This article reviews recent progress in understanding the mechanisms underlying gametophytic development and function in flowering plants. The focus is on genes and molecules involved in the processes of initiation, growth, cell specification, and fertilization of the male and female gametophytes derived primarily from studies in model systems.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "0f3cb3d8a841e0de31438da1dd99c176",
"text": "In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.",
"title": ""
},
{
"docid": "d621b555171a8545fa00ea7b84d6cacb",
"text": "Multiple café-au-lait macules (CALMs) are the hallmark of Von Recklinghausen disease, or neurofibromatosis type 1 (NF1). In 2007 we reported that some individuals with multiple CALMs have a heterozygous mutation in the SPRED1 gene and have NF1-like syndrome, or Legius syndrome. Individuals with Legius syndrome have multiple CALMs with or without freckling, but they do not show the typical NF1-associated tumors such as neurofibromas or optic pathway gliomas. NF1-associated bone abnormalities and Lisch nodules are also not reported in patients with Legius syndrome. Consequently, individuals with Legius syndrome require less intense medical surveillance than those with NF1. The SPRED1 gene was identified in 2001 and codes for a protein that downregulates the RAS-mitogen activated protein kinase (RAS-MAPK) pathway; as does neurofibromin, the protein encoded by the NF1 gene. It is estimated that about 1-4% of individuals with multiple CALMs have a heterozygous SPRED1 mutation. Mutational and clinical data on 209 patients with Legius syndrome are tabulated in an online database (http://www.lovd.nl/SPRED1). Mice with homozygous knockout of the Spred1 gene show learning deficits and decreased synaptic plasticity in hippocampal neurons similar to those seen in Nf1 heterozygous mice, underlining the importance of the RAS-MAPK pathway for learning and memory. Recently, specific binding between neurofibromin and SPRED1 was demonstrated. SPRED1 seems to play an important role in recruiting neurofibromin to the plasma membrane.",
"title": ""
},
{
"docid": "b952967acb2eaa9c780bffe211d11fa0",
"text": "Cryptographic message authentication is a growing need for FPGA-based embedded systems. In this paper a customized FPGA implementation of a GHASH function that is used in AES-GCM, a widely-used message authentication protocol, is described. The implementation limits GHASH logic utilization by specializing the hardware implementation on a per-key basis. The implemented module can generate a 128bit message authentication code in both pipelined and unpipelined versions. The pipelined GHASH version achieves an authentication throughput of more than 14 Gbit/s on a Spartan-3 FPGA and 292 Gbit/s on a Virtex-6 device. To promote adoption in the field, the complete source code for this work has been made publically-available.",
"title": ""
},
{
"docid": "2cab3b3bed055eff92703d23b1edc69d",
"text": "Due to their nonvolatile nature, excellent scalability, and high density, memristive nanodevices provide a promising solution for low-cost on-chip storage. Integrating memristor-based synaptic crossbars into digital neuromorphic processors (DNPs) may facilitate efficient realization of brain-inspired computing. This article investigates architectural design exploration of DNPs with memristive synapses by proposing two synapse readout schemes. The key design tradeoffs involving different analog-to-digital conversions and memory accessing styles are thoroughly investigated. A novel storage strategy optimized for feedforward neural networks is proposed in this work, which greatly reduces the energy and area cost of the memristor array and its peripherals.",
"title": ""
}
] |
scidocsrr
|
5beb540ccb52b4842aa712b9ecec6093
|
Structural Neighborhood Based Classification of Nodes in a Network
|
[
{
"docid": "3442a266eaaf878a507f58124e15fee3",
"text": "The application of kernel-based learning algorithms has, so far, largely been confined to realvalued data and a few special data types, such as strings. In this paper we propose a general method of constructing natural families of kernels over discrete structures, based on the matrix exponentiation idea. In particular, we focus on generating kernels on graphs, for which we propose a special class of exponential kernels called diffusion kernels, which are based on the heat equation and can be regarded as the discretization of the familiar Gaussian kernel of Euclidean space.",
"title": ""
}
] |
[
{
"docid": "59eaa9f4967abdc1c863f8fb256ae966",
"text": "CONTEXT\nThe projected expansion in the next several decades of the elderly population at highest risk for Parkinson disease (PD) makes identification of factors that promote or prevent the disease an important goal.\n\n\nOBJECTIVE\nTo explore the association of coffee and dietary caffeine intake with risk of PD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData were analyzed from 30 years of follow-up of 8004 Japanese-American men (aged 45-68 years) enrolled in the prospective longitudinal Honolulu Heart Program between 1965 and 1968.\n\n\nMAIN OUTCOME MEASURE\nIncident PD, by amount of coffee intake (measured at study enrollment and 6-year follow-up) and by total dietary caffeine intake (measured at enrollment).\n\n\nRESULTS\nDuring follow-up, 102 men were identified as having PD. Age-adjusted incidence of PD declined consistently with increased amounts of coffee intake, from 10.4 per 10,000 person-years in men who drank no coffee to 1.9 per 10,000 person-years in men who drank at least 28 oz/d (P<.001 for trend). Similar relationships were observed with total caffeine intake (P<.001 for trend) and caffeine from non-coffee sources (P=.03 for trend). Consumption of increasing amounts of coffee was also associated with lower risk of PD in men who were never, past, and current smokers at baseline (P=.049, P=.22, and P=.02, respectively, for trend). Other nutrients in coffee, including niacin, were unrelated to PD incidence. The relationship between caffeine and PD was unaltered by intake of milk and sugar.\n\n\nCONCLUSIONS\nOur findings indicate that higher coffee and caffeine intake is associated with a significantly lower incidence of PD. This effect appears to be independent of smoking. The data suggest that the mechanism is related to caffeine intake and not to other nutrients contained in coffee. JAMA. 2000;283:2674-2679.",
"title": ""
},
{
"docid": "fcbfa224b2708839e39295f24f4405e1",
"text": "A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult \"real-world\" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced andlor the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.",
"title": ""
},
{
"docid": "a29ee41e8f46d1feebeb67886b657f70",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
},
{
"docid": "e6db8cbbb3f7bac211f672ffdef44fb6",
"text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1c6677209ac3c37e4ac84b153321ab7c",
"text": "BACKGROUND\nAsthma guidelines indicate that the goal of treatment should be optimum asthma control. In a busy clinic practice with limited time and resources, there is need for a simple method for assessing asthma control with or without lung function testing.\n\n\nOBJECTIVES\nThe objective of this article was to describe the development of the Asthma Control Test (ACT), a patient-based tool for identifying patients with poorly controlled asthma.\n\n\nMETHODS\nA 22-item survey was administered to 471 patients with asthma in the offices of asthma specialists. The specialist's rating of asthma control after spirometry was also collected. Stepwise regression methods were used to select a subset of items that showed the greatest discriminant validity in relation to the specialist's rating of asthma control. Internal consistency reliability was computed, and discriminant validity tests were conducted for ACT scale scores. The performance of ACT was investigated by using logistic regression methods and receiver operating characteristic analyses.\n\n\nRESULTS\nFive items were selected from regression analyses. The internal consistency reliability of the 5-item ACT scale was 0.84. ACT scale scores discriminated between groups of patients differing in the specialist's rating of asthma control (F = 34.5, P <.00001), the need for change in patient's therapy (F = 40.3, P <.00001), and percent predicted FEV(1) (F = 4.3, P =.0052). As a screening tool, the overall agreement between ACT and the specialist's rating ranged from 71% to 78% depending on the cut points used, and the area under the receiver operating characteristic curve was 0.77.\n\n\nCONCLUSION\nResults reinforce the usefulness of a brief, easy to administer, patient-based index of asthma control.",
"title": ""
},
{
"docid": "204b902e344ac52ba5ed90e9f8d5cf54",
"text": "The reason for the rapid rise of autism in the United States that began in the 1990s is a mystery. Although individuals probably have a genetic predisposition to develop autism, researchers suspect that one or more environmental triggers are also needed. One of those triggers might be the battery of vaccinations that young children receive. Using regression analysis and controlling for family income and ethnicity, the relationship between the proportion of children who received the recommended vaccines by age 2 years and the prevalence of autism (AUT) or speech or language impairment (SLI) in each U.S. state from 2001 and 2007 was determined. A positive and statistically significant relationship was found: The higher the proportion of children receiving recommended vaccinations, the higher was the prevalence of AUT or SLI. A 1% increase in vaccination was associated with an additional 680 children having AUT or SLI. Neither parental behavior nor access to care affected the results, since vaccination proportions were not significantly related (statistically) to any other disability or to the number of pediatricians in a U.S. state. The results suggest that although mercury has been removed from many vaccines, other culprits may link vaccines to autism. Further study into the relationship between vaccines and autism is warranted.",
"title": ""
},
{
"docid": "38bdfe23b1e62cd162ed18d741f9ba05",
"text": "The authors present results of 4 studies that seek to determine the discriminant and incremental validity of the 3 most widely studied traits in psychology-self-esteem, neuroticism, and locus of control-along with a 4th, closely related trait-generalized self-efficacy. Meta-analytic results indicated that measures of the 4 traits were strongly related. Results also demonstrated that a single factor explained the relationships among measures of the 4 traits. The 4 trait measures display relatively poor discriminant validity, and each accounted for little incremental variance in predicting external criteria relative to the higher order construct. In light of these results, the authors suggest that measures purporting to assess self-esteem, locus of control, neuroticism, and generalized self-efficacy may be markers of the same higher order concept.",
"title": ""
},
{
"docid": "c18e8e3658fb4d9581b71b1e9feb8808",
"text": "This paper presents the design and analysis of a linear oscillatory single-phase permanent magnet generator (LOG) for free-piston stirling engine systems (FPSEs). In order to implement the design of LOG for suitable FPSEs, we performed a characteristic analysis of various types of LOGs having different design parameters. To improve the efficiency, we performed a characteristic analysis of the eddy-current loss based on the permanent magnet division and influence of core lamination. A dynamic characteristic analysis of LOG was performed for selected design parameters, and the analysis results of the designed LOG were compared with the measured results.",
"title": ""
},
{
"docid": "473aadc8d69632f810901d6360dd2b0c",
"text": "One of the challenges in developing real-world autonomous robots is the need for integrating and rigorously testing high-level scripting, motion planning, perception, and control algorithms. For this purpose, we introduce an open-source cross-platform software architecture called OpenRAVE, the Open Robotics and Animation Virtual Environment. OpenRAVE is targeted for real-world autonomous robot applications, and includes a seamless integration of 3-D simulation, visualization, planning, scripting and control. A plugin architecture allows users to easily write custom controllers or extend functionality. With OpenRAVE plugins, any planning algorithm, robot controller, or sensing subsystem can be distributed and dynamically loaded at run-time, which frees developers from struggling with monolithic code-bases. Users of OpenRAVE can concentrate on the development of planning and scripting aspects of a problem without having to explicitly manage the details of robot kinematics and dynamics, collision detection, world updates, and robot control. The OpenRAVE architecture provides a flexible interface that can be used in conjunction with other popular robotics packages such as Player and ROS because it is focused on autonomous motion planning and high-level scripting rather than low-level control and message protocols. OpenRAVE also supports a powerful network scripting environment which makes it simple to control and monitor robots and change execution flow during run-time. One of the key advantages of open component architectures is that they enable the robotics research community to easily share and compare algorithms.",
"title": ""
},
{
"docid": "fa3c52e9b3c4a361fd869977ba61c7bf",
"text": "The combination of the Internet and emerging technologies such as nearfield communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex application.",
"title": ""
},
{
"docid": "27814a816db4a598248dbb316e7b62a5",
"text": "Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks are typically explicit attempts to exhaust victim’s bandwidth or disrupt legitimate users’ access to services. Traditional architecture of internet is vulnerable to DDoS attacks and it provides an opportunity to an attacker to gain access to a large number of compromised computers by exploiting their vulnerabilities to set up attack networks or Botnets. Once attack network or Botnet has been set up, an attacker invokes a large-scale, coordinated attack against one or more targets. Asa result of the continuous evolution of new attacks and ever-increasing range of vulnerable hosts on the internet, many DDoS attack Detection, Prevention and Traceback mechanisms have been proposed, In this paper, we tend to surveyed different types of attacks and techniques of DDoS attacks and their countermeasures. The significance of this paper is that the coverage of many aspects of countering DDoS attacks including detection, defence and mitigation, traceback approaches, open issues and research challenges. GJCST-E Classification : D.4.6 DoSandDDoSAttacksDefenseDetection andTraceback Mechanisms-A Survey Strictly as per the compliance and regulations of:",
"title": ""
},
{
"docid": "38f85a10e8f8b815974f5e42386b1fa3",
"text": "Because Facebook is available on hundreds of millions of desktop and mobile computing platforms around the world and because it is available on many different kinds of platforms (from desktops and laptops running Windows, Unix, or OS X to hand held devices running iOS, Android, or Windows Phone), it would seem to be the perfect place to conduct steganography. On Facebook, information hidden in image files will be further obscured within the millions of pictures and other images posted and transmitted daily. Facebook is known to alter and compress uploaded images so they use minimum space and bandwidth when displayed on Facebook pages. The compression process generally disrupts attempts to use Facebook for image steganography. This paper explores a method to minimize the disruption so JPEG images can be used as steganography carriers on Facebook.",
"title": ""
},
{
"docid": "0827a91d97ed83bada2e73c0b4bbe308",
"text": "Several studies investigated different interaction techniques and input devices for older adults using touchscreen. This literature review analyses the population involved, the kind of tasks that were executed, the apparatus, the input techniques, the provided feedback, the collected data and author's findings and their recommendations. As conclusion, this review shows that age-related changes, previous experience with technologies, characteristics of handheld devices and use situations need to be studied.",
"title": ""
},
{
"docid": "7720930292c9e6140c29a17de3486ebc",
"text": "Horticulture therapy employs plants and gardening activities in therapeutic and rehabilitation activities and could be utilized to improve the quality of life of the worldwide aging population, possibly reducing costs for long-term, assisted living and dementia unit residents. Preliminary studies have reported the benefits of horticultural therapy and garden settings in reduction of pain, improvement in attention, lessening of stress, modulation of agitation, lowering of as needed medications, antipsychotics and reduction of falls. This is especially relevant for both the United States and the Republic of Korea since aging is occurring at an unprecedented rate, with Korea experiencing some of the world's greatest increases in elderly populations. In support of the role of nature as a therapeutic modality in geriatrics, most of the existing studies of garden settings have utilized views of nature or indoor plants with sparse studies employing therapeutic gardens and rehabilitation greenhouses. With few controlled clinical trials demonstrating the positive or negative effects of the use of garden settings for the rehabilitation of the aging populations, a more vigorous quantitative analysis of the benefits is long overdue. This literature review presents the data supporting future studies of the effects of natural settings for the long term care and rehabilitation of the elderly having the medical and mental health problems frequently occurring with aging.",
"title": ""
},
{
"docid": "f031ec76c4d71fcd7d7380640c933fd2",
"text": "GPU (Graphics Processing Unit) has a great impact on computing field. To enhance the performance of computing systems, researchers and developers use the parallel computing architecture of GPU. On the other hand, to reduce the development time of new products, two programming models are included in GPU, which are OpenCL (Open Computing Language) and CUDA (Compute Unified Device Architecture). The benefit of involving the two programming models in GPU is that researchers and developers don't have to understand OpenGL, DirectX or other program design, but can use GPU through simple programming language. OpenCL is an open standard API, which has the advantage of cross-platform. CUDA is a parallel computer architecture developed by NVIDIA, which includes Runtime API and Driver API. Compared with OpenCL, CUDA is with better performance. In this paper, we used plenty of similar kernels to compare the computing performance of C, OpenCL and CUDA, the two kinds of API's on NVIDIA Quadro 4000 GPU. The experimental result showed that, the executive time of CUDA Driver API was 94.9%~99.0% faster than that of C, while and the executive time of CUDA Driver API was 3.8%~5.4% faster than that of OpenCL. Accordingly, the cross-platform characteristic of OpenCL did not affect the performance of GPU.",
"title": ""
},
{
"docid": "6ff034e2ff0d54f7e73d23207789898d",
"text": "This letter presents two high-gain, multidirector Yagi-Uda antennas for use within the 24.5-GHz ISM band, realized through a multilayer, purely additive inkjet printing fabrication process on a flexible substrate. Multilayer material deposition is used to realize these 3-D antenna structures, including a fully printed 120- μm-thick dielectric substrate for microstrip-to-slotline feeding conversion. The antennas are fabricated, measured, and compared to simulated results showing good agreement and highlighting the reliable predictability of the printing process. An endfire realized gain of 8 dBi is achieved within the 24.5-GHz ISM band, presenting the highest-gain inkjet-printed antenna at this end of the millimeter-wave regime. The results of this work further demonstrate the feasibility of utilizing inkjet printing for low-cost, vertically integrated antenna structures for on-chip and on-package integration throughout the emerging field of high-frequency wireless electronics.",
"title": ""
},
{
"docid": "a357ce62099cd5b12c09c688c5b9736e",
"text": "Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.",
"title": ""
},
{
"docid": "549f719cd53f769123c34d65dca1f566",
"text": "BACKGROUND\nA large body of scientific literature derived from experimental studies emphasizes the vital role of vagal-nociceptive networks in acute pain processing. However, research on vagal activity, indexed by vagally-mediated heart rate variability (vmHRV) in chronic pain patients (CPPs), has not yet been summarized.\n\n\nOBJECTIVES\nTo systematically investigate differences in vagus nerve activity indexed by time- and frequency-domain measures of vmHRV in CPPs compared to healthy controls (HCs).\n\n\nSTUDY DESIGN\nA systematic review and meta-analysis, including meta-regression on a variety of populations (i.e., clinical etiology) and study-level (i.e., length of HRV recording) covariates.\n\n\nSETTING\nNot applicable (variety of studies included in the meta-analysis).\n\n\nMETHODS\nEight computerized databases (PubMed via MEDLINE, PsycNET, PsycINFO, Embase, CINAHL, Web of Science, PSYNDEX, and the Cochrane Library) in addition to a hand search were systematically screened for eligible studies based on pre-defined inclusion criteria. A meta-analysis on all empirical investigations reporting short- and long-term recordings of continuous time- (root-mean-square of successive R-R-interval differences [RMSSD]) and frequency-domain measures (high-frequency [HF] HRV) of vmHRV in CPPs and HCs was performed. True effect estimates as adjusted standardized mean differences (SMD; Hedges g) combined with inverse variance weights using a random effects model were computed.\n\n\nRESULTS\nCPPs show lower vmHRV than HCs indexed by RMSSD (Z = 5.47, P < .0001; g = -0.24;95% CI [-0.33, -0.16]; k = 25) and HF (Z = 4.54, P < .0001; g = -0.30; 95% CI [-0.44, -0.17]; k = 61).Meta-regression on covariates revealed significant differences by clinical etiology, age, gender, and length of HRV recording.\n\n\nLIMITATIONS\nWe did not control for other potential covariates (i.e., duration of chronic pain, medication intake) which may carry potential risk of bias.\n\n\nCONCLUSION(S)\nThe present meta-analysis is the most extensive review of the current evidence on vagal activity indexed by vmHRV in CPPs. CPPs were shown to have lower vagal activity, indexed by vmHRV, compared to HCs. Several covariates in this relationship have been identified. Further research is needed to investigate vagal activity in CPPs, in particular prospective and longitudinal follow-up studies are encouraged.",
"title": ""
},
{
"docid": "45375c1527fcb46d0d29bbb4fdab4f9c",
"text": "Removing suffixes by automatic means is an operation which is especially useful in the field of information retrieval. In a typical IR environment, one has a collection of documents, each described by the words in the document title and possibly by words in the document abstract. Ignoring the issue of precisely where the words originate, we can say that a document is represented by a vetor of words, or terms. Terms with a common stem will usually have similar meanings, for example:",
"title": ""
},
{
"docid": "24ac33300d3ea99441068c20761e8305",
"text": "Purpose – The purpose of this research is to examine the critical success factors of mobile web site adoption. Design/methodology/approach – Based on the valid responses collected from a questionnaire survey, the structural equation modelling technique was employed to examine the research model. Findings – The results indicate that system quality is the main factor affecting perceived ease of use, whereas information quality is the main factor affecting perceived usefulness. Service quality has significant effects on trust and perceived ease of use. Perceived usefulness, perceived ease of use and trust determine user satisfaction. Practical implications – Mobile service providers need to improve the system quality, information quality and service quality of mobile web sites to enhance user satisfaction. Originality/value – Previous research has mainly focused on e-commerce web site success and seldom examined the factors affecting mobile web site success. This research fills the gap. The research draws on information systems success theory, the technology acceptance model and trust theory as the theoretical bases.",
"title": ""
}
] |
scidocsrr
|
7d5a7a32bc43397f786ee4c6b04a4f5f
|
Vehicle detection for traffic flow analysis
|
[
{
"docid": "27f3060ef96f1656148acd36d50f02ce",
"text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools with complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "c708834dc328b9ab60471535bdd37cf0",
"text": "Trajectory optimizers are a powerful class of methods for generating goal-directed robot motion. Differential Dynamic Programming (DDP) is an indirect method which optimizes only over the unconstrained control-space and is therefore fast enough to allow real-time control of a full humanoid robot on modern computers. Although indirect methods automatically take into account state constraints, control limits pose a difficulty. This is particularly problematic when an expensive robot is strong enough to break itself. In this paper, we demonstrate that simple heuristics used to enforce limits (clamping and penalizing) are not efficient in general. We then propose a generalization of DDP which accommodates box inequality constraints on the controls, without significantly sacrificing convergence quality or computational effort. We apply our algorithm to three simulated problems, including the 36-DoF HRP-2 robot. A movie of our results can be found here goo.gl/eeiMnn.",
"title": ""
},
{
"docid": "58c488555240ded980033111a9657be4",
"text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.",
"title": ""
},
{
"docid": "6cce055b947b1d222bfdee01507416a1",
"text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on-board of a vehicle, and then identifies road signs assisting the driver of the vehicle to properly operate the vehicle. This paper presents an automatic road sign recognition system capable of analysing live images, detecting multiple road signs within images, and classifying the type of the detected road signs. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space and locates road signs. The classification module determines the type of detected road signs using a series of one to one architectural Multi Layer Perceptron neural networks. The performances of the classifiers that are trained using Resillient Backpropagation and Scaled Conjugate Gradient algorithms are compared. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 96% using Scaled Conjugate Gradient trained classifiers.",
"title": ""
},
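As a rough illustration of the colour-based detection stage described in the preceding abstract, the sketch below segments red-dominant regions in OpenCV's HSV space and returns bounding boxes as sign candidates. The hue and saturation thresholds, the minimum area, and the function name are illustrative assumptions rather than values from the paper, and the subsequent MLP classification stage is omitted.

```python
import cv2

def find_red_sign_candidates(bgr_image, min_area=400):
    """Return bounding boxes of red-dominant regions as rough traffic-sign candidates."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 on OpenCV's 0-179 hue scale, so combine two ranges.
    mask_low = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    mask_high = cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    mask = cv2.bitwise_or(mask_low, mask_high)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only regions large enough to plausibly be a sign; each box is (x, y, w, h).
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Each returned box would then be cropped and handed to a classifier such as the MLPs described above.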
{
"docid": "2c328d1dd45733ad8063ea89a6b6df43",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
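The residual policy idea in the abstract above (a learned correction added on top of an imperfect base controller) can be summarised with a small wrapper like the following. The class and argument names are hypothetical, and the model-free training of the residual network, which is the core of the approach, is not shown.

```python
import numpy as np

class ResidualPolicy:
    """A fixed base controller plus a learned residual correction on its actions."""

    def __init__(self, base_controller, residual_net, action_low, action_high):
        self.base = base_controller        # e.g. a hand-tuned or model-based controller
        self.residual = residual_net       # learned mapping: state -> action correction
        self.low = np.asarray(action_low)
        self.high = np.asarray(action_high)

    def act(self, state):
        # Final action = base action + learned correction, clipped to the valid range.
        action = self.base(state) + self.residual(state)
        return np.clip(action, self.low, self.high)
```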
{
"docid": "902c6c4cc66b827f901648fd3ac2f6a9",
"text": "In recent years, multiple neuromorphic architectures have been designed to execute cognitive applications that deal with image and speech analysis. These architectures have followed one of two approaches. One class of architectures is based on machine learning with artificial neural networks. A second class is focused on emulating biology with spiking neuron models, in an attempt to eventually approach the brain's accuracy and energy efficiency. A prominent example of the second class is IBM's TrueNorth processor that can execute large spiking networks on a low-power tiled architecture, and achieve high accuracy on a variety of tasks. However, as we show in this work, there are many inefficiencies in the TrueNorth design. We propose a new architecture, INXS, for spiking neural networks that improves upon the computational efficiency and energy efficiency of the TrueNorth design by 3,129× and 10× respectively. The architecture uses memristor crossbars to compute the effects of input spikes on several neurons in parallel. Digital units are then used to update neuron state. We show that the parallelism offered by crossbars is critical in achieving high throughput and energy efficiency.",
"title": ""
},
{
"docid": "9847518e92a8f1b6cef2365452b01008",
"text": "This paper presents a Planar Inverted F Antenna (PIFA) tuned with a fixed capacitor to the low frequency bands supported by the Long Term Evolution (LTE) technology. The tuning range is investigated and optimized with respect to the bandwidth and the efficiency of the resulting antenna. Simulations and mock-ups are presented.",
"title": ""
},
{
"docid": "d8a0fec69df5f8eeb2bb8e82484b8ac7",
"text": "Traditionally, Information and Communication Technology (ICT) “has been segregated from the normal teaching classroom” [12], e.g. in computer labs. This has been changed with the advent of smaller devices like iPads. There is a shift from separating ICT and education to co-located settings in which digital technology becomes part of the classroom. This paper presents the results from a study about exploring digital didactical designs using iPads applied by teachers in schools. Classroom observations and interviews in iPad-classrooms in Danish schools have been done with the aim to provide empirical evidence on the co-evolutionary design of both, didactical designs and iPads. The Danish community Odder has 7 schools where around 200 teachers and 2,000 students aged 6-16 use iPads in a 1:1 iPad-program. Three key aspects could be explored: The teachers’ digital didactical designs embrace a) new learning goals where more than one correct answer exists, b) focus on producing knowledge in informal-in-formal learning spaces, c) making learning visible in different products (text, comics, podcasts etc.). The results show the necessity of rethinking traditional Didaktik towards Digital Didactics.",
"title": ""
},
{
"docid": "fbce6308301306e0ef5877b192281a95",
"text": "AIM\nThe aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process.\n\n\nBACKGROUND\nRecent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated.\n\n\nDISCUSSION\nA modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review.\n\n\nCONCLUSION\nAn updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.",
"title": ""
},
{
"docid": "69198cc56f9c4f7f1f235ae7d7c34479",
"text": "This paper presents fine-tuned CNN features for person re-identification. Recently, features extracted from top layers of pre-trained Convolutional Neural Network (CNN) on a large annotated dataset, e.g., ImageNet, have been proven to be strong off-the-shelf descriptors for various recognition tasks. However, large disparity among the pre-trained task, i.e., ImageNet classification, and the target task, i.e., person image matching, limits performances of the CNN features for person re-identification. In this paper, we improve the CNN features by conducting a fine-tuning on a pedestrian attribute dataset. In addition to the classification loss for multiple pedestrian attribute labels, we propose new labels by combining different attribute labels and use them for an additional classification loss function. The combination attribute loss forces CNN to distinguish more person specific information, yielding more discriminative features. After extracting features from the learned CNN, we apply conventional metric learning on a target re-identification dataset for further increasing discriminative power. Experimental results on four challenging person re-identification datasets (VIPeR, CUHK, PRID450S and GRID) demonstrate the effectiveness of the proposed features.",
"title": ""
},
{
"docid": "b3c779728e4f669784c31a89ed7790f9",
"text": "Head pose estimation is essential for several applications and is particularly required for head pose-free eye-gaze tracking where estimation of head rotation permits free head movement during tracking. While the literature is broad, the accuracy of recent vision-based head pose estimation methods is contingent upon the availability of training data or accurate initialisation and tracking of specific facial landmarks. In this paper, we propose a method to estimate the head pose in realtime from the trajectories of a set of feature points spread randomly over the face region, without requiring a training phase or model-fitting of specific facial features. Conversely, without seeking specific facial landmarks, our method exploits the sparse 3-dimensional shape of the surface of interest, recovered via shape and motion factorisation, in combination with particle filtering to correct mistracked feature points and improve upon an initial estimation of the 3-dimensional shape during tracking. In comparison with two additional methods, quantitative results obtained through our modeland landmark-free method yield a reduction in the head pose estimation error for a wide range of head rotation angles.",
"title": ""
},
{
"docid": "fdebcc3ec36a61186b893773eedbd529",
"text": "OBJECTIVE\nClinical observations of the flexion synergy in individuals with chronic hemiparetic stroke describe coupling of shoulder, elbow, wrist, and finger joints. Yet, experimental quantification of the synergy within a shoulder abduction (SABD) loading paradigm has focused only on shoulder and elbow joints. The paretic wrist and fingers have typically been studied in isolation. Therefore, this study quantified involuntary behavior of paretic wrist and fingers during concurrent activation of shoulder and elbow.\n\n\nMETHODS\nEight individuals with chronic moderate-to-severe hemiparesis and four controls participated. Isometric wrist/finger and thumb flexion forces and wrist/finger flexor and extensor electromyograms (EMG) were measured at two positions when lifting the arm: in front of the torso and at maximal reaching distance. The task was completed in the ACT(3D) robotic device with six SABD loads by paretic, non-paretic, and control limbs.\n\n\nRESULTS\nConsiderable forces and EMG were generated during lifting of the paretic arm only, and they progressively increased with SABD load. Additionally, the forces were greater at the maximal reach position than at the position front of the torso.\n\n\nCONCLUSIONS\nFlexion of paretic wrist and fingers is involuntarily coupled with certain shoulder and elbow movements.\n\n\nSIGNIFICANCE\nActivation of the proximal upper limb must be considered when seeking to understand, rehabilitate, or develop devices to assist the paretic hand.",
"title": ""
},
{
"docid": "5bfedcfae127e808974ceaf0dca7970c",
"text": "A new information-theoretic approach is presented for finding the registration of volumetric medical images of differing modalities. Registration is achieved by adjustment of the relative position and orientation until the mutual information between the images is maximized. In our derivation of the registration procedure, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can foreseeably be used with a wide variety of imaging devices. This approach works directly with image data; no pre-processing or segmentation is required. This technique is, however, more flexible and robust than other intensity-based techniques like correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, and with positron-emission tomography (PET) images. Surgical applications of the registration method are described.",
"title": ""
},
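A minimal sketch of the mutual-information criterion used by the registration approach above, estimated from a joint intensity histogram with NumPy. The bin count is an arbitrary choice, and the search over transformations (which the paper performs with stochastic approximation) is not included; only the similarity measure being maximised is shown.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two aligned images via a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image B
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```

Registration then amounts to searching over pose parameters for the alignment that maximises this value.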
{
"docid": "fd14b9e25affb05fd9b05036f3ce350b",
"text": "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11:89%, outperforming the second best method by 10%.",
"title": ""
},
{
"docid": "246a4ed0d3a94fead44c1e48cc235a63",
"text": "With the introduction of fully convolutional neural networks, deep learning has raised the benchmark for medical image segmentation on both speed and accuracy, and different networks have been proposed for 2D and 3D segmentation with promising results. Nevertheless, most networks only handle relatively small numbers of labels (<10), and there are very limited works on handling highly unbalanced object sizes especially in 3D segmentation. In this paper, we propose a network architecture and the corresponding loss function which improve segmentation of very small structures. By combining skip connections and deep supervision with respect to the computational feasibility of 3D segmentation, we propose a fast converging and computationally efficient network architecture for accurate segmentation. Furthermore, inspired by the concept of focal loss, we propose an exponential logarithmic loss which balances the labels not only by their relative sizes but also by their segmentation difficulties. We achieve an average Dice coefficient of 82% on brain segmentation with 20 labels, with the ratio of the smallest to largest object sizes as 0.14%. Less than 100 epochs are required to reach such accuracy, and segmenting a 128×128×128 volume only takes around 0.4 s.",
"title": ""
},
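To make the loss design above concrete, here is a small NumPy sketch of an exponential logarithmic loss that combines a soft Dice term and a cross-entropy term, each passed through (-log x)^gamma. The gamma value, the term weights, and the omission of per-label frequency weighting are simplifying assumptions rather than the paper's exact formulation, and no gradients are computed.

```python
import numpy as np

def exp_log_loss(probs, onehot, gamma=0.3, w_dice=0.8, w_ce=0.2, eps=1e-7):
    """Scalar loss combining soft Dice and cross-entropy, each shaped by (-log x)**gamma.

    probs and onehot are (N, C) arrays of predicted probabilities and one-hot labels.
    """
    # Soft Dice per class; the exponent down-weights classes that are already easy.
    inter = (probs * onehot).sum(axis=0)
    dice = (2.0 * inter + eps) / (probs.sum(axis=0) + onehot.sum(axis=0) + eps)
    l_dice = np.mean((-np.log(np.clip(dice, eps, 1.0))) ** gamma)
    # Cross-entropy of the true-class probability, with the same exponential-log shaping.
    p_true = np.clip((probs * onehot).sum(axis=1), eps, 1.0)
    l_ce = np.mean((-np.log(p_true)) ** gamma)
    return w_dice * l_dice + w_ce * l_ce
```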
{
"docid": "6c149f1f6e9dc859bf823679df175afb",
"text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.",
"title": ""
},
{
"docid": "de638a90e5a6ef3bf030d998b0e921a3",
"text": "The quantization techniques have shown competitive performance in approximate nearest neighbor search. The state-of-the-art algorithm, composite quantization, takes advantage of the compositionabity, i.e., the vector approximation accuracy, as opposed to product quantization and Cartesian k-means. However, we have observed that the runtime cost of computing the distance table in composite quantization, which is used as a lookup table for fast distance computation, becomes nonnegligible in real applications, e.g., reordering the candidates retrieved from the inverted index when handling very large scale databases. To address this problem, we develop a novel approach, called sparse composite quantization, which constructs sparse dictionaries. The benefit is that the distance evaluation between the query and the dictionary element (a sparse vector) is accelerated using the efficient sparse vector operation, and thus the cost of distance table computation is reduced a lot. Experiment results on large scale ANN retrieval tasks (1M SIFTs and 1B SIFTs) and applications to object retrieval show that the proposed approach yields competitive performance: superior search accuracy to product quantization and Cartesian k-means with almost the same computing cost, and much faster ANN search than composite quantization with the same level of accuracy.",
"title": ""
},
{
"docid": "8be957572c846ddda107d8343094401b",
"text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms",
"title": ""
},
{
"docid": "23d560ca3bb6f2d7d9b615b5ad3224d2",
"text": "The Pebbles project is creating applications to connmt multiple Personal DigiM Assistants &DAs) to a main computer such as a PC We are cmenfly using 3Com Pd@Ilots b-use they are popdar and widespread. We created the ‘Remote Comrnandefl application to dow users to take turns sending input from their PahnPiiots to the PC as if they were using the PCS mouse and keyboard. ‘.PebblesDraw” is a shared whiteboard application we btit that allows dl of tie users to send input simtdtaneously while sharing the same PC display. We are investigating the use of these applications in various contexts, such as colocated mmtings. Keywor& Personal Digiti Assistants @DAs), PH11oc Single Display Groupware, Pebbles, AmuleL",
"title": ""
},
{
"docid": "92d5ebd49670681a5d43ba90731ae013",
"text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.",
"title": ""
}
] |
scidocsrr
|
caa2bb5c3492d4fda706ccb1d6777b0d
|
Summary in context: Searching versus browsing
|
[
{
"docid": "7c0ef25b2a4d777456facdfc526cf206",
"text": "The paper presents a novel approach to unsupervised text summarization. The novelty lies in exploiting the diversity of concepts in text for summarization, which has not received much attention in the summarization literature. A diversity-based approach here is a principled generalization of Maximal Marginal Relevance criterion by Carbonell and Goldstein \\cite{carbonell-goldstein98}.\nWe propose, in addition, aninformation-centricapproach to evaluation, where the quality of summaries is judged not in terms of how well they match human-created summaries but in terms of how well they represent their source documents in IR tasks such document retrieval and text categorization.\nTo find the effectiveness of our approach under the proposed evaluation scheme, we set out to examine how a system with the diversity functionality performs against one without, using the BMIR-J2 corpus, a test data developed by a Japanese research consortium. The results demonstrate a clear superiority of a diversity based approach to a non-diversity based approach.",
"title": ""
}
] |
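The diversity-based summarization approach in the passage above generalises the Maximal Marginal Relevance (MMR) criterion. For reference, a plain greedy MMR selector might look like the sketch below; the trade-off parameter and the similarity inputs are placeholders, and this is the baseline criterion the paper builds on rather than its proposed generalisation.

```python
import numpy as np

def mmr_select(sim_to_query, sim_matrix, k, lam=0.7):
    """Greedy Maximal Marginal Relevance: pick k items balancing relevance and novelty.

    sim_to_query: (n,) relevance of each candidate sentence to the query/centroid.
    sim_matrix:   (n, n) pairwise similarities between candidate sentences.
    """
    selected, remaining = [], list(range(len(sim_to_query)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            redundancy = max(sim_matrix[i][j] for j in selected) if selected else 0.0
            return lam * sim_to_query[i] - (1.0 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected
```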
[
{
"docid": "b2d75c2f8ac81937557bb4de1113b90d",
"text": "End-to-end learning framework is useful for building dialog systems for its simplicity in training and efficiency in model updating. However, current end-to-end approaches only consider user semantic inputs in learning and under-utilize other user information. Therefore, we propose to include user sentiment obtained through multimodal information (acoustic, dialogic and textual), in the end-to-end learning framework to make systems more user-adaptive and effective. We incorporated user sentiment information in both supervised and reinforcement learning settings. In both settings, adding sentiment information reduced the dialog length and improved the task success rate on a bus information search task. This work is the first attempt to incorporate multimodal user information in the adaptive end-toend dialog system training framework and attained state-of-the-art performance.",
"title": ""
},
{
"docid": "e57732931a053f73280564270c764f15",
"text": "Neural generative model in question answering (QA) usually employs sequence-to-sequence (Seq2Seq) learning to generate answers based on the user’s questions as opposed to the retrieval-based model selecting the best matched answer from a repository of pre-defined QA pairs. One key challenge of neural generative model in QA lies in generating high-frequency and generic answers regardless of the questions, partially due to optimizing log-likelihood objective function. In this paper, we investigate multitask learning (MTL) in neural network-based method under a QA scenario. We define our main task as agenerative QA via Seq2Seq learning. And we define our auxiliary task as a discriminative QA via binary QAclassification. Both main task and auxiliary task are learned jointly with shared representations, allowing to obtain improved generalization and transferring classification labels as extra evidences to guide the word sequence generation of the answers. Experimental results on both automatic evaluations and human annotations demonstrate the superiorities of our proposed method over baselines.",
"title": ""
},
{
"docid": "0af9b629032ae50a2e94310abcc55aa5",
"text": "We introduce novel relaxations for cardinality-constrained learning problems, including least-squares regression as a special but important case. Our approach is based on reformulating a cardinality-constrained problem exactly as a Boolean program, to which standard convex relaxations such as the Lasserre and Sherali-Adams hierarchies can be applied. We analyze the first-order relaxation in detail, deriving necessary and sufficient conditions for exactness in a unified manner. In the special case of least-squares regression, we show that these conditions are satisfied with high probability for random ensembles satisfying suitable incoherence conditions, similar to results on 1-relaxations. In contrast to known methods, our relaxations yield lower bounds on the objective, and it can be verified whether or not the relaxation is exact. If it is not, we show that randomization based on the relaxed solution offers a principled way to generate provably good feasible solutions. This property enables us to obtain high quality estimates even if incoherence conditions are not met, as might be expected in real datasets. We numerically illustrate the performance of the relaxationrandomization strategy in both synthetic and real high-dimensional datasets, revealing substantial improvements relative to 1-based methods and greedy selection heuristics. B Laurent El Ghaoui [email protected] Mert Pilanci [email protected] Martin J. Wainwright [email protected] 1 Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA 2 Department of Electrical Engineering and Computer Sciences and Department of Statistics, University of California, Berkeley, CA, USA",
"title": ""
},
{
"docid": "7ca1c9096c6176cb841ae7f0e7262cb7",
"text": "“Industry 4.0” is recognized as the future of industrial production in which concepts as Smart Factory and Decentralized Decision Making are fundamental. This paper proposes a novel strategy to support decentralized decision, whilst identifying opportunities and challenges of Industry 4.0 contextualizing the potential that represents industrial digitalization and how technological advances can contribute for a new perspective on manufacturing production. It is analysed a set of barriers to the full implementation of Industry 4.0 vision, identifying areas in which decision support is vital. Then, for each of the identified areas, the authors propose a strategy, characterizing it together with the level of complexity that is involved in the different processes. The strategies proposed are derived from the needs of two of Industry 4.0 main characteristics: horizontal integration and vertical integration. For each case, decision approaches are proposed concerning the type of decision required (strategic, tactical, operational and real-time). Validation results are provided together with a discussion on the main challenges that might be an obstacle for a successful decision strategy.",
"title": ""
},
{
"docid": "c6878e9e106655f492a989be9e33176f",
"text": "Employees who are engaged in their work are fully connected with their work roles. They are bursting with energy, dedicated to their work, and immersed in their work activities. This article presents an overview of the concept of work engagement. I discuss the antecedents and consequences of engagement. The review shows that job and personal resources are the main predictors of engagement. These resources gain their salience in the context of high job demands. Engaged workers are more open to new information, more productive, and more willing to go the extra mile. Moreover, engaged workers proactively change their work environment in order to stay engaged. The findings of previous studies are integrated in an overall model that can be used to develop work engagement and advance job performance in today’s workplace.",
"title": ""
},
{
"docid": "5e53a20b6904a9b8765b0384f5d1d692",
"text": "This paper provides a description of the crowdfunding sector, considering investment-based crowdfunding platforms as well as platforms in which funders do not obtain monetary payments. It lays out key features of this quickly developing sector and explores the economic forces at play that can explain the design of these platforms. In particular, it elaborates on cross-group and within-group external e¤ects and asymmetric information on crowdfunding platforms. Keywords: Crowdfunding, Platform markets, Network e¤ects, Asymmetric information, P2P lending JEL-Classi
cation: L13, D62, G24 Université catholique de Louvain, CORE and Louvain School of Management, and CESifo yRITM, University of Paris Sud and Digital Society Institute zUniversity of Mannheim, Mannheim Centre for Competition and Innovation (MaCCI), and CERRE. Email: [email protected]",
"title": ""
},
{
"docid": "39208755abbd92af643d0e30029f6cc0",
"text": "The biomedical community makes extensive use of text mining technology. In the past several years, enormous progress has been made in developing tools and methods, and the community has been witness to some exciting developments. Although the state of the community is regularly reviewed, the sheer volume of work related to biomedical text mining and the rapid pace in which progress continues to be made make this a worthwhile, if not necessary, endeavor. This chapter provides a brief overview of the current state of text mining in the biomedical domain. Emphasis is placed on the resources and tools available to biomedical researchers and practitioners, as well as the major text mining tasks of interest to the community. These tasks include the recognition of explicit facts from biomedical literature, the discovery of previously unknown or implicit facts, document summarization, and question answering. For each topic, its basic challenges and methods are outlined and recent and influential work is reviewed.",
"title": ""
},
{
"docid": "89e034a5f8472ef4426f4642d01b9802",
"text": "This paper presents CORD, a reliable bulk data dissemination protocol for propagating a large data object to all the nodes in a large scale sensor network. Unlike well- known reliable data dissemination protocols such as Deluge whose primary design criterion is to reduce the latency of object propagation, CORD's primary goal is to minimize energy consumption. To achieve its goals CORD employs a two phase approach in which the object is delivered to a subset of nodes in the network that form a connected dominating set in the first phase, and to the remaining nodes in the second phase. Further, CORD installs a coordinated sleep schedule on the nodes in the network whereby nodes that are not involved in receiving or transmitting data can turn off their radios to reduce their energy consumption. We evaluated the performance of CORD experimentally on both an indoor and outdoor sensor network testbed and via extensive simulations. Our results show that in comparison to Deluge (the de facto network reprogramming protocol for TinyOS) CORD significantly reduces the energy consumption for reliable data dissemination while achieving a comparable latency.",
"title": ""
},
{
"docid": "4aee0c91e48b9a34be4591d36103c622",
"text": "We construct a polyhedron that is topologically convex (i.e., has the graph of a convex polyhedron) yet has no vertex unfolding: no matter how we cut along the edges and keep faces attached at vertices to form a connected (hinged) surface, the surface necessarily unfolds with overlap.",
"title": ""
},
{
"docid": "24a10176ec2367a6a0b5333d57b894b8",
"text": "Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.",
"title": ""
},
{
"docid": "0be273eb8dfec6a6f71a44f38e8207ba",
"text": "Clustering is a powerful tool which has been used in several forecasting works, such as time series forecasting, real time storm detection, flood forecasting and so on. In this paper, a generic methodology for weather forecasting is proposed by the help of incremental K-means clustering algorithm. Weather forecasting plays an important role in day to day applications.Weather forecasting of this paper is done based on the incremental air pollution database of west Bengal in the years of 2009 and 2010. This paper generally uses typical Kmeans clustering on the main air pollution database and a list of weather category will be developed based on the maximum mean values of the clusters.Now when the new data are coming, the incremental K-means is used to group those data into those clusters whose weather category has been already defined. Thus it builds up a strategy to predict the weather of the upcoming data of the upcoming days. This forecasting database is totally based on the weather of west Bengal and this forecasting methodology is developed to mitigating the impacts of air pollutions and launch focused modeling computations for prediction and forecasts of weather events. Here accuracy of this approach is also measured.",
"title": ""
},
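The incremental K-means step described above, assigning newly arriving samples to previously learned clusters, essentially reduces to a nearest-centroid assignment with an online centroid update. A minimal sketch follows; the function and variable names are assumptions, and the initial batch K-means that produces the centroids is not shown.

```python
import numpy as np

def incremental_assign(x, centroids, counts):
    """Assign a new sample to its nearest existing cluster and update that centroid online.

    centroids: (k, d) float array from the initial batch K-means; counts: per-cluster sizes.
    """
    j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
    counts[j] += 1
    centroids[j] += (x - centroids[j]) / counts[j]   # running-mean centroid update
    return j  # index of the cluster (and hence the weather category) assigned to x
```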
{
"docid": "20e10963c305ca422fb025cafc807301",
"text": "The new psychological disorder of Internet addiction is fast accruing both popular and professional recognition. Past studies have indicated that some patterns of Internet use are associated with loneliness, shyness, anxiety, depression, and self-consciousness, but there appears to be little consensus about Internet addiction disorder. This exploratory study attempted to examine the potential influences of personality variables, such as shyness and locus of control, online experiences, and demographics on Internet addiction. Data were gathered from a convenient sample using a combination of online and offline methods. The respondents comprised 722 Internet users mostly from the Net-generation. Results indicated that the higher the tendency of one being addicted to the Internet, the shyer the person is, the less faith the person has, the firmer belief the person holds in the irresistible power of others, and the higher trust the person places on chance in determining his or her own course of life. People who are addicted to the Internet make intense and frequent use of it both in terms of days per week and in length of each session, especially for online communication via e-mail, ICQ, chat rooms, newsgroups, and online games. Furthermore, full-time students are more likely to be addicted to the Internet, as they are considered high-risk for problems because of free and unlimited access and flexible time schedules. Implications to help professionals and student affairs policy makers are addressed.",
"title": ""
},
{
"docid": "c09adc1924c9c1b32c33b23d9df489b9",
"text": "In recent years, “document store” NoSQL systems have exploded in popularity. A large part of this popularity has been driven by the adoption of the JSON data model in these NoSQL systems. JSON is a simple but expressive data model that is used in many Web 2.0 applications, and maps naturally to the native data types of many modern programming languages (e.g. Javascript). The advantages of these NoSQL document store systems (like MongoDB and CouchDB) are tempered by a lack of traditional RDBMS features, notably a sophisticated declarative query language, rich native query processing constructs (e.g. joins), and transaction management providing ACID safety guarantees. In this paper, we investigate whether the advantages of the JSON data model can be added to RDBMSs, gaining some of the traditional benefits of relational systems in the bargain. We present Argo, an automated mapping layer for storing and querying JSON data in a relational system, and NoBench, a benchmark suite that evaluates the performance of several classes of queries over JSON data in NoSQL and SQL databases. Our results point to directions of how one can marry the best of both worlds, namely combining the flexibility of JSON to support the popular document store model with the rich query processing and transactional properties that are offered by traditional relational DBMSs.",
"title": ""
},
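One simple way to picture an automated JSON-to-relational mapping layer of the kind evaluated above is to flatten nested objects into (key path, value) rows. This is only an illustrative flattening, not Argo's actual schema, and it leaves arrays as opaque values.

```python
def flatten_json(doc, prefix=""):
    """Flatten a nested JSON object into (key path, value) rows for relational storage.

    Arrays are kept as opaque values here; a full mapping layer would also unnest them.
    """
    rows = []
    for key, value in doc.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            rows.extend(flatten_json(value, path))   # recurse into nested objects
        else:
            rows.append((path, value))               # one row per leaf attribute
    return rows
```

For example, flatten_json({"user": {"name": "a", "age": 3}}) yields [("user.name", "a"), ("user.age", 3)], which maps naturally onto a key-value table.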
{
"docid": "ebc17ee3bfe7fb5cda23c7db07e5ae8d",
"text": "This paper describes the hardware and software ecosystem encompassing the brain-inspired TrueNorth processor – a 70mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4 × 4 configuration by exploiting TrueNorth's native tiling. For software, we present an end-to-end ecosystem consisting of a simulator, a programming language, an integrated programming environment, a library of algorithms and applications, firmware, tools for deep learning, a teaching curriculum, and cloud enablement. For the scale-up systems we summarize our approach to physical placement of neural network, to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government/corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.",
"title": ""
},
{
"docid": "a6f1480f52d142a013bb88a92e47b0d7",
"text": "An isolated switched high step up boost DC-DC converter is discussed in this paper. The main objective of this paper is to step up low voltage to very high voltage. This paper mainly initiates at boosting a 30V DC into 240V DC. The discussed converter benefits from the continuous input current. Usually, step-up DC-DC converters are suitable for input whose voltage level is very low. The circuital design comprises of four main stages. Firstly, an impedance network which is used to boost the low input voltage. Secondly a switching network which is used to boost the input voltage then an isolation transformer which is used to provide higher boosting ability and finally a voltage multiplier rectifier which is used to rectify the secondary voltage of the transformer. No switching deadtime is required, which increases the reliability of the converter. Comparing with the existing step-up topologies indicates that this new design is hybrid, portable, higher power density and the size of the whole system is also reduced. The principles as well as operations were analysed and experimentally worked out, which provides a higher efficiency. KeywordImpedance Network, Switching Network, Isolation Transformer, Voltage Multiplier Rectifier, MicroController, DC-DC Boost Converter __________________________________________________________________________________________________",
"title": ""
},
{
"docid": "5f39c5df4127824b3408e2b34f000bee",
"text": "Objective To evaluate the information, its source, beliefs an d perceptions of acne patients regarding acne and their expectations about treatment. Patients and methods All acne patients visiting Dermatology outpatient c lini at WAPDA Teaching Hospital Complex, Lahore and at private practice fo r management were asked to fill a voluntary questionnaire containing information about patients ’ beliefs and perception about acne. Grading was done by a dermatologist. Result 449 patients completed the pro forma. Males were 37 % and females 63%. 54.1% of patients waited for one year to have treatment. More than 60 % thought acne as a curable disease and more than 50% expected it to clear in 2-4 weeks. Most of them decided themselves to visit the doctor or were influenced by their parents. Most of them gath ered information regarding acne from close relatives and friends. Infection and poor hygiene ( less washing of face with soap) was thought to be the most important cause. Facial masks and lotions were most commonly tried non-prescription acne products. 45% thought that acne had a severe i mpact on their self-image. Topical treatment was the most desired one. More than 40% of patients had grade IV acne and there was no significant difference between males and females re garding grade wise presentation. Conclusion Community-based health education program is require d to increase the awareness about acne and to resolve the misconceptions.",
"title": ""
},
{
"docid": "02b4c741b4a68e1b437674d874f10253",
"text": "Traffic sign recognition is an important step for integrating smart vehicles into existing road transportation systems. In this paper, an NVIDIA Jetson TX1-based traffic sign recognition system is introduced for driver assistance applications. The system incorporates two major operations, traffic sign detection and recognition. Image color and shape based detection is used to locate potential signs in each frame. A pre-trained convolutional neural network performs classification on these potential sign candidates. The proposed system is implemented on NVIDIA Jetson TX1 board with web-camera. Based on a well-known benchmark suite, 96% detection accuracy is achieved while executing at 1.6 frames per seconds.",
"title": ""
},
{
"docid": "8a363d7fa2bbf4b30312ca9efc2b3fa5",
"text": "The objective of the present study was to investigate whether transpedicular bone grafting as a supplement to posterior pedicle screw fixation in thoracolumbar fractures results in a stable reconstruction of the anterior column, that allows healing of the fracture without loss of correction. Posterior instrumentation using an internal fixator is a standard procedure for stabilizing the injured thoracolumbar spine. Transpedicular bone grafting was first described by Daniaux in 1986 to achieve intrabody fusion. Pedicle screw fixation with additional transpedicular fusion has remained controversial because of inconsistent reports. A retrospective single surgeon cohort study was performed. Between October 2001 and May 2007, 30 consecutive patients with 31 acute traumatic burst fractures of the thoracolumbar spine (D12-L5) were treated operatively. The mean age of the patients was 45.7 years (range: 19-78). There were 23 men and 7 women. Nineteen thoracolumbar fractures were sustained in falls from a height; the other fractures were the result of motor vehicle accidents. The vertebrae most often involved were L1 in 13 patients and L2 in 8 patients. According to the Magerl classification, 25 patients sustained Type A1, 4 Type A2 and 2 Type A3 fractures. The mean time from injury to surgery was 6 days (range 2-14 days). Two postoperative complications were observed: one superficial and one deep infection. Mean Cobb's angle improved from +7.16 degrees (SD 12.44) preoperatively to -5.48 degrees (SD 11.44) immediately after operation, with a mean loss of correction of 1.00 degrees (SD 3.04) at two years. Reconstruction of the anterior column is important to prevent loss of correction. In our experience, the use of transpedicular bone grafting has efficiently restored the anterior column and has preserved the post-operative correction of kyphosis until healing of the fracture.",
"title": ""
},
{
"docid": "636ace52ca3377809326735810a08310",
"text": "BACKGROUND\nAlthough many patients with venous thromboembolism require extended treatment, it is uncertain whether it is better to use full- or lower-intensity anticoagulation therapy or aspirin.\n\n\nMETHODS\nIn this randomized, double-blind, phase 3 study, we assigned 3396 patients with venous thromboembolism to receive either once-daily rivaroxaban (at doses of 20 mg or 10 mg) or 100 mg of aspirin. All the study patients had completed 6 to 12 months of anticoagulation therapy and were in equipoise regarding the need for continued anticoagulation. Study drugs were administered for up to 12 months. The primary efficacy outcome was symptomatic recurrent fatal or nonfatal venous thromboembolism, and the principal safety outcome was major bleeding.\n\n\nRESULTS\nA total of 3365 patients were included in the intention-to-treat analyses (median treatment duration, 351 days). The primary efficacy outcome occurred in 17 of 1107 patients (1.5%) receiving 20 mg of rivaroxaban and in 13 of 1127 patients (1.2%) receiving 10 mg of rivaroxaban, as compared with 50 of 1131 patients (4.4%) receiving aspirin (hazard ratio for 20 mg of rivaroxaban vs. aspirin, 0.34; 95% confidence interval [CI], 0.20 to 0.59; hazard ratio for 10 mg of rivaroxaban vs. aspirin, 0.26; 95% CI, 0.14 to 0.47; P<0.001 for both comparisons). Rates of major bleeding were 0.5% in the group receiving 20 mg of rivaroxaban, 0.4% in the group receiving 10 mg of rivaroxaban, and 0.3% in the aspirin group; the rates of clinically relevant nonmajor bleeding were 2.7%, 2.0%, and 1.8%, respectively. The incidence of adverse events was similar in all three groups.\n\n\nCONCLUSIONS\nAmong patients with venous thromboembolism in equipoise for continued anticoagulation, the risk of a recurrent event was significantly lower with rivaroxaban at either a treatment dose (20 mg) or a prophylactic dose (10 mg) than with aspirin, without a significant increase in bleeding rates. (Funded by Bayer Pharmaceuticals; EINSTEIN CHOICE ClinicalTrials.gov number, NCT02064439 .).",
"title": ""
},
{
"docid": "12a89641dd93939be587b2bcf1b26939",
"text": "Drug-drug interaction (DDI) is a vital information when physicians and pharmacists prepare for the combined use of two or more drugs. Thus, several DDI databases are constructed to avoid mistakenly medicine administering. In recent years, automatically extracting DDIs from biomedical text has drawn researchers’ attention. However, the existing work need either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by the deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on 2013 SemEval DDIExtraction dataset. The experiments show that our model classifies most of the drug pairs into correct DDI categories, which outperforms the existing NLP or deep learning method.",
"title": ""
}
] |
scidocsrr
|
d1a7a06b4bb406fbc85882382610e767
|
Tactical cooperative planning for autonomous highway driving using Monte-Carlo Tree Search
|
[
{
"docid": "6b878f3084bd74d963f25b3fd87d0a34",
"text": "Cooperative behavior planning for automated vehicles is getting more and more attention in the research community. This paper introduces two dimensions to structure cooperative driving tasks. The authors suggest to distinguish driving tasks by the used communication channels and by the hierarchical level of cooperative skills and abilities. In this manner, this paper presents the cooperative behavior skills of \"Jack\", our automated vehicle driving from Stanford to Las Vegas in January 2015.",
"title": ""
}
] |
[
{
"docid": "d49d099d3f560584f2d080e7a1e2711f",
"text": "Dark Web forums are heavily used by extremist and terrorist groups for communication, recruiting, ideology sharing, and radicalization. These forums often have relevance to the Iraqi insurgency or Al-Qaeda and are of interest to security and intelligence organizations. This paper presents an automated approach to sentiment and affect analysis of selected radical international Ahadist Dark Web forums. The approach incorporates a rich textual feature representation and machine learning techniques to identify and measure the sentiment polarities and affect intensities expressed in forum communications. The results of sentiment and affect analysis performed on two large-scale Dark Web forums are presented, offering insight into the communities and participants.",
"title": ""
},
{
"docid": "d763198d3bfb1d30b153e13245c90c08",
"text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.",
"title": ""
},
{
"docid": "a26dd0133a66a8868d84ef418bcaf9f5",
"text": "In performance display advertising a key metric of a campaign effectiveness is its conversion rate -- the proportion of users who take a predefined action on the advertiser website, such as a purchase. Predicting this conversion rate is thus essential for estimating the value of an impression and can be achieved via machine learning. One difficulty however is that the conversions can take place long after the impression -- up to a month -- and this delayed feedback hinders the conversion modeling. We tackle this issue by introducing an additional model that captures the conversion delay. Intuitively, this probabilistic model helps determining whether a user that has not converted should be treated as a negative sample -- when the elapsed time is larger than the predicted delay -- or should be discarded from the training set -- when it is too early to tell. We provide experimental results on real traffic logs that demonstrate the effectiveness of the proposed model.",
"title": ""
},
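The delayed-feedback intuition above, deciding how strongly to treat a not-yet-converted impression as a negative, can be expressed with a single Bayes step if one assumes an exponential conversion delay. The parameter names below are placeholders; in the paper both the conversion model and the delay model are learned, which is not shown here.

```python
import numpy as np

def prob_will_still_convert(p_conversion, lam, elapsed):
    """P(conversion eventually | none observed after `elapsed`), with exponential delay.

    p_conversion: predicted probability the click converts at all.
    lam:          rate of the assumed exponential conversion-delay distribution.
    """
    survive = np.exp(-lam * elapsed)   # P(delay > elapsed | will convert)
    return p_conversion * survive / (1.0 - p_conversion + p_conversion * survive)
```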
{
"docid": "238215c552415dd21bed4a12fdc0cc4c",
"text": "A method for real-time estimation of parameters in a linear dynamic state space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight for indirect adaptive or reconfigurable control. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle (HARV) were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than 1 cycle of the dominant dynamic mode natural frequencies, using control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements, and could be implemented aboard an aircraft in real time. Nomenclature A,B,C,D system matrices E ; @ expectation operator g acceleration due to gravity, ft/sec h altitude, ft j imaginary number = -1 M Mach number N total number of samples Re real part R measurement noise covariance matrix t time Copyright © 1999 by the American Institute of Aeronautics and Astronautics, Inc. No copyright is asserted in the United States under Title 17, U.S. Code. The U.S. Government has a royaltyfree license to exercise all rights under the copyright claimed herein for Governmental purposes. All other rights are reserved by the copyright owner. Vt true airspeed, ft/sec x, u, y state, input, and output vectors zi measured output vector at time i t D α angle of attack, rad β sideslip angle, rad δ δ a r , aileron, rudder deflections, rad δ δ e s , elevator, stabilator deflections, rad δ ij Kronecker delta ν i discrete measurement noise vector σ 2 variance ω angular frequency, rad/sec θ p-dimensional parameter vector superscripts T transpose † complex conjugate transpose ~ discrete Fourier transform $ estimate –1 matrix inverse subscripts i value at time i t D o trim or initial value Introduction Real-time identification of dynamic models is a requirement for indirect adaptive or reconfigurable control. One approach for satisfying this requirement is to assume the dynamic model has a linear structure with time-varying parameters to account for changes in the flight condition, stores, configuration, remaining fuel, or from various types of failures, wear, or damage. The task is then to identify accurate linear model parameter estimates from measured data in real time, so that the adaptive control logic can make the necessary https://ntrs.nasa.gov/search.jsp?R=20040087105 2017-09-13T22:15:14+00:00Z",
"title": ""
},
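The recursive Fourier transform used above for real-time frequency-domain analysis can be sketched as a running sum that folds in each new sample at a fixed set of frequencies. The class name, the chosen frequencies, and the absence of any scaling convention are assumptions; the frequency-domain equation-error least-squares step that follows it in the paper is omitted.

```python
import numpy as np

class RecursiveDFT:
    """Running discrete Fourier transform at a fixed set of angular frequencies."""

    def __init__(self, omegas, dt):
        self.omegas = np.asarray(omegas, dtype=float)  # analysis frequencies, rad/s
        self.dt = dt                                   # sampling interval, s
        self.i = 0
        self.X = np.zeros(len(self.omegas), dtype=complex)

    def update(self, x_i):
        # Fold in only the newest sample; past data never needs to be re-transformed.
        self.X += x_i * np.exp(-1j * self.omegas * self.i * self.dt)
        self.i += 1
        return self.X
```

Calling update once per measured sample keeps the running transforms current without reprocessing the data history, which is what makes the method cheap enough for onboard use.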
{
"docid": "85678fca24cfa94efcc36570b3f1ef62",
"text": "Content-based recommender systems use preference ratings and features that characterize media to model users' interests or information needs for making future recommendations. While previously developed in the music and text domains, we present an initial exploration of content-based recommendation for spoken documents using a corpus of public domain internet audio. Unlike familiar speech technologies of topic identification and spoken document retrieval, our recommendation task requires a more comprehensive notion of document relevance than bags-of-words would supply. Inspired by music recommender systems, we automatically extract a wide variety of content-based features to characterize non-linguistic aspects of the audio such as speaker, language, gender, and environment. To combine these heterogeneous information sources into a single relevance judgement, we evaluate feature, score, and hybrid fusion techniques. Our study provides an essential first exploration of the task and clearly demonstrates the value of a multisource approach over a bag-of-words baseline.",
"title": ""
},
{
"docid": "4e071e10b9263d98061b87a7c7ceee02",
"text": "Seeking more common ground between data scientists and their critics.",
"title": ""
},
{
"docid": "9b430645f7b0da19b2c55d43985259d8",
"text": "Research on human spatial memory and navigational ability has recently shown the strong influence of reference systems in spatial memory on the ways spatial information is accessed in navigation and other spatially oriented tasks. One of the main findings can be characterized as a large cognitive cost, both in terms of speed and accuracy that occurs whenever the reference system used to encode spatial information in memory is not aligned with the reference system required by a particular task. In this paper, the role of aligned and misaligned reference systems is discussed in the context of the built environment and modern architecture. The role of architectural design on the perception and mental representation of space by humans is investigated. The navigability and usability of built space is systematically analysed in the light of cognitive theories of spatial and navigational abilities of humans. It is concluded that a building’s navigability and related wayfinding issues can benefit from architectural design that takes into account basic results of spatial cognition research. 1 Wayfinding and Architecture Life takes place in space and humans, like other organisms, have developed adaptive strategies to find their way around their environment. Tasks such as identifying a place or direction, retracing one’s path, or navigating a large-scale space, are essential elements to mobile organisms. Most of these spatial abilities have evolved in natural environments over a very long time, using properties present in nature as cues for spatial orientation and wayfinding. With the rise of complex social structure and culture, humans began to modify their natural environment to better fit their needs. The emergence of primitive dwellings mainly provided shelter, but at the same time allowed builders to create environments whose spatial structure “regulated” the chaotic natural environment. They did this by using basic measurements and geometric relations, such as straight lines, right angles, etc., as the basic elements of design (Le Corbusier, 1931, p. 69ff.) In modern society, most of our lives take place in similar regulated, human-made spatial environments, with paths, tracks, streets, and hallways as the main arteries of human locomotion. Architecture and landscape architecture embody the human effort to structure space in meaningful and useful ways. Architectural design of space has multiple functions. Architecture is designed to satisfy the different representational, functional, aesthetic, and emotional needs of organizations and the people who live or work in these structures. In this chapter, emphasis lies on a specific functional aspect of architectural design: human wayfinding. Many approaches to improving architecture focus on functional issues, like improved ecological design, the creation of improved workplaces, better climate control, lighting conditions, or social meeting areas. Similarly, when focusing on the mobility of humans, the ease of wayfinding within a building can be seen as an essential function of a building’s design (Arthur & Passini, 1992; Passini, 1984). When focusing on wayfinding issues in buildings, cities, and landscapes, the designed spatial environment can be seen as an important tool in achieving a particular goal, e.g., reaching a destination or finding an exit in case of emergency. 
This view, if taken to a literal extreme, is summarized by Le Corbusier’s (1931) notion of the building as a “machine,” mirroring in architecture the engineering ideals of efficiency and functionality found in airplanes and cars. In the narrow sense of wayfinding, a building thus can be considered of good design if it allows easy and error-free navigation. This view is also adopted by Passini (1984), who states that “although the architecture and the spatial configuration of a building generate the wayfinding problems people have to solve, they are also a wayfinding support system in that they contain the information necessary to solve the problem” (p. 110). Like other problems of engineering, the wayfinding problem in architecture should have one or more solutions that can be evaluated. This view of architecture can be contrasted with the alternative view of architecture as “built philosophy”. According to this latter view, architecture, like art, expresses ideas and cultural progress by shaping the spatial structure of the world – a view which gives consideration to the users as part of the philosophical approach but not necessarily from a usability perspective. Viewing wayfinding within the built environment as a “man-machine-interaction” problem makes clear that good architectural design with respect to navigability needs to take two factors into account. First, the human user comes equipped with particular sensory, perceptual, motoric, and cognitive abilities. Knowledge of these abilities and the limitations of an average user or special user populations thus is a prerequisite for good design. Second, structural, functional, financial, and other design considerations restrict the degrees of freedom architects have in designing usable spaces. In the following sections, we first focus on basic research on human spatial cognition. Even though not all of it is directly applicable to architectural design and wayfinding, it lays the foundation for more specific analyses in part 3 and 4. In part 3, the emphasis is on a specific research question that recently has attracted some attention: the role of environmental structure (e.g., building and street layout) for the selection of a spatial reference frame. In part 4, implications for architectural design are discussed by means of two real-world examples. 2 The human user in wayfinding 2.1 Navigational strategies Finding one’s way in the environment, reaching a destination, or remembering the location of relevant objects are some of the elementary tasks of human activity. Fortunately, human navigators are well equipped with an array of flexible navigational strategies, which usually enable them to master their spatial environment (Allen, 1999). In addition, human navigation can rely on tools that extend human sensory and mnemonic abilities. Most spatial or navigational strategies are so common that they do not occur to us when we perform them. Walking down a hallway we hardly realize that the optical and acoustical flows give us rich information about where we are headed and whether we will collide with other objects (Gibson, 1979). Our perception of other objects already includes physical and social models on how they will move and where they will be once we reach the point where paths might cross. Following a path can consist of following a particular visual texture (e.g., asphalt) or feeling a handrail in the dark by touch. 
At places where multiple continuing paths are possible, we might have learned to associate the scene with a particular action (e.g., turn left; Schölkopf & Mallot, 1995), or we might try to approximate a heading direction by choosing the path that most closely resembles this direction. When in doubt about our path we might ask another person or consult a map. As is evident from this brief (and not exhaustive) description, navigational strategies and activities are rich in diversity and adaptability (for an overview see Golledge, 1999; Werner, Krieg-Brückner, & Herrmann, 2000), some of which are aided by architectural design and signage (see Arthur & Passini, 1992; Passini, 1984). Despite the large number of different navigational strategies, people still experience problems finding their way or even feel lost momentarily. This feeling of being lost might reflect the lack of a key component of human wayfinding: knowledge about where one is located in an environment – with respect to one’s goal, one’s starting location, or with respect to the global environment one is in. As Lynch put it, “the terror of being lost comes from the necessity that a mobile organism be oriented in its surroundings” (1960, p. 125.) Some wayfinding strategies, like vector navigation, rely heavily on this information. Other strategies, e.g. piloting or path-following, which are based on purely local information can benefit from even vague locational knowledge as a redundant source of information to validate or question navigational decisions (see Werner et al., 2000, for examples.) Proficient signage in buildings, on the other hand, relies on a different strategy. It relieves a user from keeping track of his or her position in space by indicating the correct navigational choice whenever the choice becomes relevant. Keeping track of one’s position during navigation can be done quite easily if access to global landmarks, reference directions, or coordinates is possible. Unfortunately, the built environment often does not allow for simple navigational strategies based on these types of information. Instead, spatial information has to be integrated across multiple places, paths, turns, and extended periods of time (see Poucet, 1993, for an interesting model of how this can be achieved). In the next section we will describe an essential ingredient of this integration – the mental representation of spatial information in memory. 2.2 Alignment effects in spatial memory When observing tourists in an unfamiliar environment, one often notices people frantically turning maps to align the noticeable landmarks depicted in the map with the visible landmarks as seen from the viewpoint of the tourist. This type of behavior indicates a well-established cognitive principle (Levine, Jankovic, & Palij, 1982). Observers more easily comprehend and use information depicted in “You-are-here” (YAH) maps if the up-down direction of the map coincides with the front-back direction of the observer. In this situation, the natural preference of directional mapping of top to front and bottom to back is used, and left and right in the map stay left and right in the depicted world. While th",
"title": ""
},
{
"docid": "30f48021bca12899d6f2e012e93ba12d",
"text": "There are several locomotion mechanisms in Nature. The study of mechanics of any locomotion is very useful for scientists and researchers. Many locomotion principles from Nature have been adapted in robotics. There are several species which are capable of multimode locomotion such as walking and swimming, and flying etc. Frogs are such species, capable of jumping, walking, and swimming. Multimode locomotion is important for robots to work in unknown environment. Frogs are widely known as good multimode locomotors. Webbed feet help them to swim efficiently in water. This paper presents the study of frog's swimming locomotion and adapting the webbed feet for swimming locomotion of the robots. A simple mechanical model of robotic leg with webbed foot, which can be used for multi-mode locomotion and robotic frog, is put forward. All the joints of the legs are designed to be driven by tendon-pulley arrangement with the actuators mounted on the body, which allows the legs to be lighter and compact.",
"title": ""
},
{
"docid": "be2137514d2c1431d82c28a4ae2719ad",
"text": "The Exact Set Similarity Join problem aims to find all similar sets between two collections of sets, with respect to a threshold and a similarity function such as overlap, Jaccard, dice or cosine. The näıve approach verifies all pairs of sets and it is often considered impractical due the high number of combinations. So, Exact Set Similarity Join algorithms are usually based on the Filter-Verification Framework, that applies a series of filters to reduce the number of verified pairs. This paper presents a new filtering technique called Bitmap Filter, which is able to accelerate state-of-the-art algorithms for the exact Set Similarity Join problem. The Bitmap Filter uses hash functions to create bitmaps of fixed b bits, representing characteristics of the sets. Then, it applies bitwise operations (such as xor and population count) on the bitmaps in order to infer a similarity upper bound for each pair of sets. If the upper bound is below a given similarity threshold, the pair of sets is pruned. The Bitmap Filter benefits from the fact that bitwise operations are efficiently implemented by many modern general-purpose processors and it was easily applied to four state-of-the-art algorithms implemented in CPU: AllPairs, PPJoin, AdaptJoin and GroupJoin. Furthermore, we propose a Graphic Processor Unit (GPU) algorithm based on the näıve approach but using the Bitmap Filter to speedup the computation. The experiments considered 9 collections containing from 100 thousands up to 10 million sets and the joins were made using Jaccard thresholds from 0.50 to 0.95. The Bitmap Filter was able to improve 90% of the experiments in CPU, with speedups of up to 4.50× and 1.43× on average. Using the GPU algorithm, the experiments were able to speedup the original CPU algorithms by up to 577× using an Nvidia Geforce GTX 980 Ti.",
"title": ""
},
{
"docid": "bdae3fb85df9de789a9faa2c08a5c0fb",
"text": "The rapid, exponential growth of modern electronics has brought about profound changes to our daily lives. However, maintaining the growth trend now faces significant challenges at both the fundamental and practical levels [1]. Possible solutions include More Moore?developing new, alternative device structures and materials while maintaining the same basic computer architecture, and More Than Moore?enabling alternative computing architectures and hybrid integration to achieve increased system functionality without trying to push the devices beyond limits. In particular, an increasing number of computing tasks today are related to handling large amounts of data, e.g. image processing as an example. Conventional von Neumann digital computers, with separate memory and processer units, become less and less efficient when large amount of data have to be moved around and processed quickly. Alternative approaches such as bio-inspired neuromorphic circuits, with distributed computing and localized storage in networks, become attractive options [2]?[6].",
"title": ""
},
{
"docid": "2272fb00555b1aef0f1861fe83111b5d",
"text": "JavaScript is a dynamic programming language adopted in a variety of applications, including web pages, PDF Readers, widget engines, network platforms, office suites. Given its widespread presence throughout different software platforms, JavaScript is a primary tool for the development of novel -rapidly evolving- malicious exploits. If the classical signature- and heuristic-based detection approaches are clearly inadequate to cope with this kind of threat, machine learning solutions proposed so far suffer from high false-alarm rates or require special instrumentation that make them not suitable for protecting end-user systems.\n In this paper we present Lux0R \"Lux 0n discriminant References\", a novel, lightweight approach to the detection of malicious JavaScript code. Our method is based on the characterization of JavaScript code through its API references, i.e., functions, constants, objects, methods, keywords as well as attributes natively recognized by a JavaScript Application Programming Interface (API). We exploit machine learning techniques to select a subset of API references that characterize malicious code, and then use them to detect JavaScript malware. The selection algorithm has been thought to be \"secure by design\" against evasion by mimicry attacks. In this investigation, we focus on a relevant application domain, i.e., the detection of malicious JavaScript code within PDF documents. We show that our technique is able to achieve excellent malware detection accuracy, even on samples exploiting never-before-seen vulnerabilities, i.e., for which there are no examples in training data. Finally, we experimentally assess the robustness of Lux0R against mimicry attacks based on feature addition.",
"title": ""
},
{
"docid": "f4438c21802e244d4021ef3390aecf89",
"text": "Ship detection has been playing a significant role in the field of remote sensing for a long time but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection and the redundancy of detection region. In order to solve such problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ship in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving the problem resulted from the narrow width of the ship. Compared with previous multi-scale detectors such as Feature Pyramid Network (FPN), DFPN builds the high-level semantic feature-maps for all scales by means of dense connections, through which enhances the feature propagation and encourages the feature reuse. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multi-scale ROI Align for the purpose of maintaining the completeness of semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on RDFPN representation has a state-of-the-art performance.",
"title": ""
},
{
"docid": "e9c52fb24425bff6ed514de6b92e8ba2",
"text": "This paper proposes a ultra compact Wilkinson power combiner (WPC) incorporating synthetic transmission lines at K-band in CMOS technology. The 50 % improvement on the size reduction can be achieved by increasing the slow-wave factor of synthetic transmission line. The presented Wilkinson power combiner design is analyzed and fabricated by using standard 0.18 µm 1P6M CMOS technology. The prototype has only a chip size of 480 µm × 90 µm, corresponding to 0.0002λ02 at 21.5 GHz. The measured insertion losses and return losses are less and higher than 4 dB and 17.5 dB from 16 GHz to 27 GHz, respectively. Furthermore, the proposed WPC is also integrated into the phase shifter to confirm its feasibility. The prototype of phase shifter shows 15 % size reduction and on-wafer measurements show good linearity of full 360-degree phase shifting from 21 GHz to 27 GHz.",
"title": ""
},
{
"docid": "4a227bddcaed44777eb7a29dcf940c6c",
"text": "Deep neural networks have achieved great success on a variety of machine learning tasks. There are many fundamental and open questions yet to be answered, however. We introduce the Extended Data Jacobian Matrix (EDJM) as an architecture-independent tool to analyze neural networks at the manifold of interest. The spectrum of the EDJM is found to be highly correlated with the complexity of the learned functions. After studying the effect of dropout, ensembles, and model distillation using EDJM, we propose a novel spectral regularization method, which improves network performance.",
"title": ""
},
{
"docid": "3830c568e6b9b56bab1c971d2a99757c",
"text": "Lagrangian theory provides a diverse set of tools for continuous motion analysis. Existing work shows the applicability of Lagrangian method for video analysis in several aspects. In this paper we want to utilize the concept of Lagrangian measures to detect violent scenes. Therefore we propose a local feature based on the SIFT algorithm that incooperates appearance and Lagrangian based motion models. We will show that the temporal interval of the used motion information is a crucial aspect and study its influence on the classification performance. The proposed LaSIFT feature outperforms other state-of-the-art local features, in particular in uncontrolled realistic video data. We evaluate our algorithm with a bag-of-word approach. The experimental results show a significant improvement over the state-of-the-art on current violent detection datasets, i.e. Crowd Violence, Hockey Fight.",
"title": ""
},
{
"docid": "f0d5a4bb917a8dd40f0f38fcc9460d3b",
"text": "Simple decisions arise from the evaluation of sensory evidence. But decisions are determined by more than just evidence. Individuals establish internal decision criteria that influence how they respond. Where or how decision criteria are established in the brain remains poorly understood. Here, we show that neuronal activity in the superior colliculus (SC) predicts changes in decision criteria. Using a novel \"Yes-No\" task that isolates changes in decision criterion from changes in decision sensitivity, and computing neuronal measures of sensitivity and criterion, we find that SC neuronal activity correlates with the decision criterion regardless of the location of the choice report. We also show that electrical manipulation of activity within the SC produces changes in decisions consistent with changes in decision criteria and are largely independent of the choice report location. Our correlational and causal results together provide strong evidence that SC activity signals the position of a decision criterion. VIDEO ABSTRACT.",
"title": ""
},
{
"docid": "925709dfe0d0946ca06d05b290f2b9bd",
"text": "Mentalization, operationalized as reflective functioning (RF), can play a crucial role in the psychological mechanisms underlying personality functioning. This study aimed to: (a) study the association between RF, personality disorders (cluster level) and functioning; (b) investigate whether RF and personality functioning are influenced by (secure vs. insecure) attachment; and (c) explore the potential mediating effect of RF on the relationship between attachment and personality functioning. The Shedler-Westen Assessment Procedure (SWAP-200) was used to assess personality disorders and levels of psychological functioning in a clinical sample (N = 88). Attachment and RF were evaluated with the Adult Attachment Interview (AAI) and Reflective Functioning Scale (RFS). Findings showed that RF had significant negative associations with cluster A and B personality disorders, and a significant positive association with psychological functioning. Moreover, levels of RF and personality functioning were influenced by attachment patterns. Finally, RF completely mediated the relationship between (secure/insecure) attachment and adaptive psychological features, and thus accounted for differences in overall personality functioning. Lack of mentalization seemed strongly associated with vulnerabilities in personality functioning, especially in patients with cluster A and B personality disorders. These findings provide support for the development of therapeutic interventions to improve patients' RF.",
"title": ""
},
{
"docid": "c6035abd67504564fbf4b8c6015beb2e",
"text": "Intermediaries can choose between functioning as a marketplace (on which suppliers sell their products directly to buyers) or as a reseller (purchasing products from suppliers and selling them to buyers). We model this as a decision between whether control rights over a non-contractible decision variable (the choice of some marketing activity) are better held by suppliers (the marketplacemode) or by the intermediary (the reseller-mode). Whether the marketplace or the reseller mode is preferred depends on whether independent suppliers or the intermediary have more important information relevant to the optimal tailoring of marketing activities for each specific product. We show that this tradeoff is shifted towards the reseller-mode when marketing activities create spillovers across products and when network effects lead to unfavorable expectations about supplier participation. If the reseller has a variable cost advantage (respectively, disadvantage) relative to the marketplace then the tradeoff is shifted towards the marketplace for long-tail (respectively, shorttail) products. We thus provide a theory of which products an intermediary should offer in each mode. We also provide some empirical evidence that supports our main results. JEL classification: D4, L1, L5",
"title": ""
},
{
"docid": "8756441420669a6845254242030e0a79",
"text": "We propose a recurrent neural network (RNN) based model for image multi-label classification. Our model uniquely integrates and learning of visual attention and Long Short Term Memory (LSTM) layers, which jointly learns the labels of interest and their co-occurrences, while the associated image regions are visually attended. Different from existing approaches utilize either model in their network architectures, training of our model does not require pre-defined label orders. Moreover, a robust inference process is introduced so that prediction errors would not propagate and thus affect the performance. Our experiments on NUS-WISE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.",
"title": ""
},
{
"docid": "3473e7d5f49374339d12120d1644ec3d",
"text": "Patients with chronic conditions make day-to-day decisions about--self-manage--their illnesses. This reality introduces a new chronic disease paradigm: the patient-professional partnership, involving collaborative care and self-management education. Self-management education complements traditional patient education in supporting patients to live the best possible quality of life with their chronic condition. Whereas traditional patient education offers information and technical skills, self-management education teaches problem-solving skills. A central concept in self-management is self-efficacy--confidence to carry out a behavior necessary to reach a desired goal. Self-efficacy is enhanced when patients succeed in solving patient-identified problems. Evidence from controlled clinical trials suggests that (1) programs teaching self-management skills are more effective than information-only patient education in improving clinical outcomes; (2) in some circumstances, self-management education improves outcomes and can reduce costs for arthritis and probably for adult asthma patients; and (3) in initial studies, a self-management education program bringing together patients with a variety of chronic conditions may improve outcomes and reduce costs. Self-management education for chronic illness may soon become an integral part of high-quality primary care.",
"title": ""
}
] |
scidocsrr
|
7e45f0f26f2bdce07b23ce2c2383ec40
|
Does sexual selection explain human sex differences in aggression?
|
[
{
"docid": "0688abcb05069aa8a0956a0bd1d9bf54",
"text": "Sex differences in mortality rates stem from genetic, physiological, behavioral, and social causes that are best understood when integrated in an evolutionary life history framework. This paper investigates the Male-to-Female Mortality Ratio (M:F MR) from external and internal causes and across contexts to illustrate how sex differences shaped by sexual selection interact with the environment to yield a pattern with some consistency, but also with expected variations due to socioeconomic and other factors.",
"title": ""
}
] |
[
{
"docid": "ba8cddc6ed18f941ed7409524137c28c",
"text": "This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.",
"title": ""
},
{
"docid": "88530d3d70df372b915556eab919a3fe",
"text": "The airway mucosa is lined by a continuous epithelium comprised of multiple cell phenotypes, several of which are secretory. Secretions produced by these cells mix with a variety of macromolecules, ions and water to form a respiratory tract fluid that protects the more distal airways and alveoli from injury and infection. The present article highlights the structure of the mucosa, particularly its secretory cells, gives a synopsis of the structure of mucus, and provides new information on the localization of mucin (MUC) genes that determine the peptide sequence of the protein backbone of the glycoproteins, which are a major component of mucus. Airway secretory cells comprise the mucous, serous, Clara and dense-core granulated cells of the surface epithelium, and the mucous and serous acinar cells of the submucosal glands. Several transitional phenotypes may be found, especially during irritation or disease. Respiratory tract mucins constitute a heterogeneous group of high molecular weight, polydisperse richly glycosylated molecules: both secreted and membrane-associated forms of mucin are found. Several mucin (MUC) genes encoding the protein core of mucin have been identified. We demonstrate the localization of MUC gene expression to a number of distinct cell types and their upregulation both in response to experimentally administered lipopolysaccharide and cystic fibrosis.",
"title": ""
},
{
"docid": "46f3f27a88b4184a15eeb98366e599ec",
"text": "Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to its potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. In this technical review, we describe applications and challenges of the radiomic field. We will review radiomic application areas and technical issues, as well as proper practices for the designs of radiomic studies.",
"title": ""
},
{
"docid": "0d2f933b139f50ff9195118d9d1466aa",
"text": "Ambient Intelligence (AmI) and Smart Environments (SmE) are based on three foundations: ubiquitous computing, ubiquitous communication and intelligent adaptive interfaces [41]. This type of systems consists of a series of interconnected computing and sensing devices which surround the user pervasively in his environment and are invisible to him, providing a service that is dynamically adapted to the interaction context, so that users can naturally interact with the system and thus perceive it as intelligent. To ensure such a natural and intelligent interaction, it is necessary to provide an effective, easy, safe and transparent interaction between the user and the system. With this objective, as an attempt to enhance and ease human-to-computer interaction, in the last years there has been an increasing interest in simulating human-tohuman communication, employing the so-called multimodal dialogue systems [46]. These systems go beyond both the desktop metaphor and the traditional speech-only interfaces by incorporating several communication modalities, such as speech, gaze, gestures or facial expressions. Multimodal dialogue systems offer several advantages. Firstly, they can make use of automatic recognition techniques to sense the environment allowing the user to employ different input modalities, some of these technologies are automatic speech recognition [62], natural language processing [12], face location and tracking [77], gaze tracking [58], lipreading recognition [13], gesture recognition [39], and handwriting recognition [78].",
"title": ""
},
{
"docid": "f0f16472cdb6b52b05d1d324e55da081",
"text": "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1/ √ n, we show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines.",
"title": ""
},
{
"docid": "e32e17bb36f39d6020bced297b3989fe",
"text": "Memory networks are a recently introduced model that combines reasoning, attention and memory for solving tasks in the areas of language understanding and dialogue -- where one exciting direction is the use of these models for dialogue-based recommendation. In this talk we describe these models and how they can learn to discuss, answer questions about, and recommend sets of items to a user. The ultimate goal of this research is to produce a full dialogue-based recommendation assistant. We will discuss recent datasets and evaluation tasks that have been built to assess these models abilities to see how far we have come.",
"title": ""
},
{
"docid": "2f48ab4d20f0928837bf10d2f638fed3",
"text": "Duchenne muscular dystrophy (DMD), a recessive sex-linked hereditary disorder, is characterized by degeneration, atrophy, and weakness of skeletal and cardiac muscle. The purpose of this study was to document the prevalence of abnormally low resting BP recordings in patients with DMD in our outpatient clinic. The charts of 31 patients with DMD attending the cardiology clinic at Rush University Medical Center were retrospectively reviewed. Demographic data, systolic, diastolic, and mean blood pressures along with current medications, echocardiograms, and documented clinical appreciation and management of low blood pressure were recorded in the form of 104 outpatient clinical visits. Blood pressure (BP) was classified as low if the systolic and/or mean BP was less than the fifth percentile for height for patients aged ≤17 years (n = 23). For patients ≥18 years (n = 8), systolic blood pressure (SBP) <90 mmHg or a mean arterial pressure (MAP) <60 mmHg was recorded as a low reading. Patients with other forms of myopathy or unclear diagnosis were excluded. Statistical analysis was done using PASW version 18. BP was documented at 103 (99.01 %) outpatient encounters. Low systolic and mean BP were recorded in 35 (33.7 %) encounters. This represented low recordings for 19 (61.3 %) out of a total 31 patients with two or more successive low recordings for 12 (38.7 %) patients. Thirty-one low BP encounters were in patients <18 years old. Hispanic patients accounted for 74 (71.2 %) visits and had low BP recorded in 32 (43.2 %) instances. The patients were non-ambulant in 71 (68.3 %) encounters. Out of 35 encounters with low BP, 17 patients (48.6 %) were taking heart failure medication. In instances when patients had low BP, 22 (66.7 %) out of 33 echocardiography encounters had normal left ventricular ejection fraction. Clinician comments on low BP reading were present in 11 (10.6 %) encounters, and treatment modification occurred in only 1 (1 %) patient. Age in years (p = .031) and ethnicity (p = .035) were independent predictors of low BP using stepwise multiple regression analysis. Low BP was recorded in a significant number of patient encounters in patients with DMD. Age 17 years or less and Hispanic ethnicity were significant predictors associated with low BP readings in our DMD cohort. Concomitant heart failure therapy was not a statistically significant association. There is a need for enhanced awareness of low BP in DMD patients among primary care and specialty physicians. The etiology and clinical impact of these findings are unclear but may impact escalation of heart failure therapy.",
"title": ""
},
{
"docid": "3c1cc57db29b8c86de4f314163ccaca0",
"text": "We are motivated by the need for a generic object proposal generation algorithm which achieves good balance between object detection recall, proposal localization quality and computational efficiency. We propose a novel object proposal algorithm, BING++, which inherits the virtue of good computational efficiency of BING [1] but significantly improves its proposal localization quality. At high level we formulate the problem of object proposal generation from a novel probabilistic perspective, based on which our BING++ manages to improve the localization quality by employing edges and segments to estimate object boundaries and update the proposals sequentially. We propose learning the parameters efficiently by searching for approximate solutions in a quantized parameter space for complexity reduction. We demonstrate the generalization of BING++ with the same fixed parameters across different object classes and datasets. Empirically our BING++ can run at half speed of BING on CPU, but significantly improve the localization quality by 18.5 and 16.7 percent on both VOC2007 and Microhsoft COCO datasets, respectively. Compared with other state-of-the-art approaches, BING++ can achieve comparable performance, but run significantly faster.",
"title": ""
},
{
"docid": "7b463b290988262db44984a89846129c",
"text": "We describe an integrated strategy for planning, perception, state-estimation and action in complex mobile manipulation domains based on planning in the belief space of probability distributions over states using hierarchical goal regression (pre-image back-chaining). We develop a vocabulary of logical expressions that describe sets of belief states, which are goals and subgoals in the planning process. We show that a relatively small set of symbolic operators can give rise to task-oriented perception in support of the manipulation goals. An implementation of this method is demonstrated in simulation and on a real PR2 robot, showing robust, flexible solution of mobile manipulation problems with multiple objects and substantial uncertainty.",
"title": ""
},
{
"docid": "a1018c89d326274e4b71ffc42f4ebba2",
"text": "We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature.",
"title": ""
},
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
},
{
"docid": "afae709279cd8adeda2888089872d70e",
"text": "One-class classification problemhas been investigated thoroughly for past decades. Among one of themost effective neural network approaches for one-class classification, autoencoder has been successfully applied for many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed in autoencoder neural network, we propose a simple and efficient one-class classifier based on extreme learning machine (ELM).The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning speed.The experimental evaluation conducted on several real-world benchmarks shows that the ELM based one-class classifier can learn hundreds of times faster than autoencoder and it is competitive over a variety of one-class classification methods.",
"title": ""
},
{
"docid": "1c80fdc30b2b37443367dae187fbb376",
"text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.",
"title": ""
},
{
"docid": "d013bf1a031dd8a4e546c963cd8bde84",
"text": "Parallel text is the fuel that drives modern machine translation systems. The Web is a comprehensive source of preexisting parallel text, but crawling the entire web is impossible for all but the largest companies. We bring web-scale parallel text to the masses by mining the Common Crawl, a public Web crawl hosted on Amazon’s Elastic Cloud. Starting from nothing more than a set of common two-letter language codes, our open-source extension of the STRAND algorithm mined 32 terabytes of the crawl in just under a day, at a cost of about $500. Our large-scale experiment uncovers large amounts of parallel text in dozens of language pairs across a variety of domains and genres, some previously unavailable in curated datasets. Even with minimal cleaning and filtering, the resulting data boosts translation performance across the board for five different language pairs in the news domain, and on open domain test sets we see improvements of up to 5 BLEU. We make our code and data available for other researchers seeking to mine this rich new data resource.1",
"title": ""
},
{
"docid": "f676c503bcf59a8916995a6db3908792",
"text": "Bone tissue engineering has been increasingly studied as an alternative approach to bone defect reconstruction. In this approach, new bone cells are stimulated to grow and heal the defect with the aid of a scaffold that serves as a medium for bone cell formation and growth. Scaffolds made of metallic materials have preferably been chosen for bone tissue engineering applications where load-bearing capacities are required, considering the superior mechanical properties possessed by this type of materials to those of polymeric and ceramic materials. The space holder method has been recognized as one of the viable methods for the fabrication of metallic biomedical scaffolds. In this method, temporary powder particles, namely space holder, are devised as a pore former for scaffolds. In general, the whole scaffold fabrication process with the space holder method can be divided into four main steps: (i) mixing of metal matrix powder and space-holding particles; (ii) compaction of granular materials; (iii) removal of space-holding particles; (iv) sintering of porous scaffold preform. In this review, detailed procedures in each of these steps are presented. Technical challenges encountered during scaffold fabrication with this specific method are addressed. In conclusion, strategies are yet to be developed to address problematic issues raised, such as powder segregation, pore inhomogeneity, distortion of pore sizes and shape, uncontrolled shrinkage and contamination.",
"title": ""
},
{
"docid": "3194a0dd979b668bb25afb10260c30d2",
"text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.",
"title": ""
},
{
"docid": "8bf7524cf8f4696833cfc3d7b5d57349",
"text": "This article is concerned with the design, implementation and control of a redundant robotic tool for Minimal Invasive Surgical (MIS) operations. The robotic tool is modular, comprised of identical stages of dual rotational Degrees of Freedom (DoF). An antagonistic tendon-driven mechanism using two DC-motors in a puller-follower configuration is used for each DoF. The inherent Coulomb friction is compensated using an adaptive scheme while varying the follower's reaction. Preliminary experimental results are provided to investigate the efficiency of the robot in typical surgical manoeuvres.",
"title": ""
},
{
"docid": "408696a41684af20733b25833e741259",
"text": "We propose a method for accurate 3D shape reconstruction using uncalibrated multiview photometric stereo. A coarse mesh reconstructed using multiview stereo is first parameterized using a planar mesh parameterization technique. Subsequently, multiview photometric stereo is performed in the 2D parameter domain of the mesh, where all geometric and photometric cues from multiple images can be treated uniformly. Unlike traditional methods, there is no need for merging view-dependent surface normal maps. Our key contribution is a new photometric stereo based mesh refinement technique that can efficiently reconstruct meshes with extremely fine geometric details by directly estimating a displacement texture map in the 2D parameter domain. We demonstrate that intricate surface geometry can be reconstructed using several challenging datasets containing surfaces with specular reflections, multiple albedos and complex topologies.",
"title": ""
},
{
"docid": "413b9fe872843974cc4c1fcb9839ce0e",
"text": "1. I N T R O D U C T I O N Despite the long history of machine translation projects, and the well-known effects that evaluations such as the ALPAC Report (Pierce et al., 1966) have had on that history, optimal MT evaluation methodologies remain elusive. This is perhaps due in part to the subjectivity inherent in judging the quality of any translation output (human or machine). The difficulty also lies in the heterogeneity of MT language pairs, computational approaches, and intended end-use. The DARPA machine translation initiative is faced with all of these issues in evaluation, and so requires a suite of evaluation methodologies which minimize subjectivity and transcend the heterogeneity problems. At the same time, the initiative seeks to formulate this suite in such a way that it is economical to administer and portable to other MT development initiatives. This paper describes an evaluation of three research MT systems along with benchmark haman and external MT outputs. Two sets of evaluations were performed, one using a relatively complex suite of methodologies, and the other using a simpler set on the same data. The test procedure is described, along The authors would like to express their gratitude to Michael Naber for his assistance in compiling, expressing and interpreting data. with a comparison of the results of the different methodologies.",
"title": ""
},
{
"docid": "8e03f4410676fb4285596960880263e9",
"text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developped a generic fuzzy inference engine and contributed to the open source community.",
"title": ""
}
] |
scidocsrr
|
95006eeb2a4eb63e5e3007fa6348b76e
|
Facebook and privacy: it's complicated
|
[
{
"docid": "e4aecd0346609d2c372c8d354704358d",
"text": "The sharing of personal data has emerged as a popular activity over online social networking sites like Facebook. As a result, the issue of online social network privacy has received significant attention in both the research literature and the mainstream media. Our overarching goal is to improve defaults and provide better tools for managing privacy, but we are limited by the fact that the full extent of the privacy problem remains unknown; there is little quantification of the incidence of incorrect privacy settings or the difficulty users face when managing their privacy.\n In this paper, we focus on measuring the disparity between the desired and actual privacy settings, quantifying the magnitude of the problem of managing privacy. We deploy a survey, implemented as a Facebook application, to 200 Facebook users recruited via Amazon Mechanical Turk. We find that 36% of content remains shared with the default privacy settings. We also find that, overall, privacy settings match users' expectations only 37% of the time, and when incorrect, almost always expose content to more users than expected. Finally, we explore how our results have potential to assist users in selecting appropriate privacy settings by examining the user-created friend lists. We find that these have significant correlation with the social network, suggesting that information from the social network may be helpful in implementing new tools for managing privacy.",
"title": ""
}
] |
[
{
"docid": "1f7e17d46250205565223d0838a1940e",
"text": "Augmenting a processor with special hardware that is able to apply a Single Instruction to Multiple Data(SIMD) at the same time is a cost effective way of improving processor performance. It also offers a means of improving the ratio of processor performance to power usage due to reduced and more effective data movement and intrinsically lower instruction counts. This paper considers and compares the NEON SIMD instruction set used on the ARM Cortex-A series of RISC processors with the SSE2 SIMD instruction set found on Intel platforms within the context of the Open Computer Vision (OpenCV) library. The performance obtained using compiler auto-vectorization is compared with that achieved using hand-tuning across a range of five different benchmarks and ten different hardware platforms. On the ARM platforms the hand-tuned NEON benchmarks were between 1.05× and 13.88× faster than the auto-vectorized code, while for the Intel platforms the hand-tuned SSE benchmarks were between 1.34× and 5.54× faster.",
"title": ""
},
{
"docid": "a3bff96ab2a6379d21abaea00bc54391",
"text": "In view of the advantages of deep networks in producing useful representation, the generated features of different modality data (such as image, audio) can be jointly learned using Multimodal Restricted Boltzmann Machines (MRB-M). Recently, audiovisual speech recognition based the M-RBM has attracted much attention, and the MRBM shows its effectiveness in learning the joint representation across audiovisual modalities. However, the built networks have weakness in modeling the multimodal sequence which is the natural property of speech signal. In this paper, we will introduce a novel temporal multimodal deep learning architecture, named as Recurrent Temporal Multimodal RB-M (RTMRBM), that models multimodal sequences by transforming the sequence of connected MRBMs into a probabilistic series model. Compared with existing multimodal networks, it's simple and efficient in learning temporal joint representation. We evaluate our model on audiovisual speech datasets, two public (AVLetters and AVLetters2) and one self-build. The experimental results demonstrate that our approach can obviously improve the accuracy of recognition compared with standard MRBM and the temporal model based on conditional RBM. In addition, RTMRBM still outperforms non-temporal multimodal deep networks in the presence of the weakness of long-term dependencies.",
"title": ""
},
{
"docid": "67e7b542e876c213540c747934fd3557",
"text": "This paper presents preliminary work on musical instruments ontology design, and investigates heterogeneity and limitations in existing instrument classification schemes. Numerous research to date aims at representing information about musical instruments. The works we examined are based on the well known Hornbostel and Sach’s classification scheme. We developed representations using the Ontology Web Language (OWL), and compared terminological and conceptual heterogeneity using SPARQL queries. We found evidence to support that traditional designs based on taxonomy trees lead to ill-defined knowledge representation, especially in the context of an ontology for the Semantic Web. In order to overcome this issue, it is desirable to have an instrument ontology that exhibits a semantically rich structure.",
"title": ""
},
{
"docid": "02e9379c0661c22188e9dbd64728035d",
"text": "PURPOSE\nThe objective of this study is to examine the effects of acute ingestion of dietary nitrate on endurance running performance in highly trained cross-country skiers. Dietary nitrate has been shown to reduce the oxygen cost of submaximal exercise and improve tolerance of high-intensity exercise, but it is not known if this holds true for highly trained endurance athletes.\n\n\nMETHODS\nTen male junior cross-country skiers (V˙O(2max)) ≈ 70 mL·kg·min) each completed two trials in a randomized, double-blind design. Participants ingested potassium nitrate (614-mg nitrate) or a nitrate-free placebo 2.5 h before two 5-min submaximal tests on a treadmill at 10 km·h (≈55% of V˙O(2max)) and 14 km·h (≈75% of V˙O(2max)), followed by a 5-km running time trial on an indoor track.\n\n\nRESULTS\nPlasma nitrite concentrations were higher after nitrate supplementation (325 ± 95 nmol·L) compared with placebo (143 ± 59 nmol·L, P < 0.001). There was no significant difference in 5-km time-trial performance between nitrate (1005 ± 53 s) and placebo treatments (996 ± 49 s, P = 0.12). The oxygen cost of submaximal running was not significantly different between placebo and nitrate trials at 10 km·h (both 2.84 ± 0.34 L·min) and 14 km·h (3.89 ± 0.39 vs. 3.77 ± 0.62 L·min).\n\n\nCONCLUSIONS\nAcute ingestion of dietary nitrate may not represent an effective strategy for reducing the oxygen cost of submaximal exercise or for enhancing endurance exercise performance in highly trained cross-country skiers.",
"title": ""
},
{
"docid": "3204def0de796db05e4fcc2a86743bb6",
"text": "This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3 party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.",
"title": ""
},
{
"docid": "bf7d502a818ac159cf402067b4416858",
"text": "We present algorithms for evaluating and performing modeling operatyons on NURBS surfaces using the programmable fragment processor on the Graphics Processing Unit (GPU). We extend our GPU-based NURBS evaluator that evaluates NURBS surfaces to compute exact normals for either standard or rational B-spline surfaces for use in rendering and geometric modeling. We build on these calculations in our new GPU algorithms to perform standard modeling operations such as inverse evaluations, ray intersections, and surface-surface intersections on the GPU. Our modeling algorithms run in real time, enabling the user to sketch on the actual surface to create new features. In addition, the designer can edit the surface by interactively trimming it without the need for re-tessellation. We also present a GPU-accelerated algorithm to perform surface-surface intersection operations with NURBS surfaces that can output intersection curves in the model space as well as in the parametric spaces of both the intersecting surfaces at interactive rates.",
"title": ""
},
{
"docid": "83f8b57090e5290acbf8fd7586232891",
"text": "In this paper, we propose a new clustering model, called DEeP Embedded Regularized ClusTering (DEPICT), which efficiently maps data into a discriminative embedding subspace and precisely predicts cluster assignments. DEPICT generally consists of a multinomial logistic regression function stacked on top of a multi-layer convolutional autoencoder. We define a clustering objective function using relative entropy (KL divergence) minimization, regularized by a prior for the frequency of cluster assignments. An alternating strategy is then derived to optimize the objective by updating parameters and estimating cluster assignments. Furthermore, we employ the reconstruction loss functions in our autoencoder, as a data-dependent regularization term, to prevent the deep embedding function from overfitting. In order to benefit from end-to-end optimization and eliminate the necessity for layer-wise pre-training, we introduce a joint learning framework to minimize the unified clustering and reconstruction loss functions together and train all network layers simultaneously. Experimental results indicate the superiority and faster running time of DEPICT in real-world clustering tasks, where no labeled data is available for hyper-parameter tuning.",
"title": ""
},
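The DEPICT passage above centers on a relative-entropy clustering objective over soft assignments produced by a multinomial logistic regression stacked on an embedding. The sketch below is a minimal NumPy illustration of that style of objective, not the authors' code: the embedding is random, the square-root cluster-frequency target follows the common formulation of such losses, and all names are ours.

```python
import numpy as np

def soft_assignments(Z, W):
    """Multinomial logistic regression over an embedding Z (n x d) with weights W (d x k)."""
    logits = Z @ W
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)  # Q: n x k soft cluster assignments

def target_distribution(Q):
    """Auxiliary target that sharpens Q while penalizing oversized clusters."""
    weight = Q / np.sqrt(Q.sum(axis=0, keepdims=True))
    return weight / weight.sum(axis=1, keepdims=True)

def kl_clustering_loss(P, Q, eps=1e-12):
    """KL(P || Q), the relative-entropy clustering objective, averaged over samples."""
    return np.sum(P * (np.log(P + eps) - np.log(Q + eps))) / P.shape[0]

# Toy usage with a random embedding standing in for autoencoder features.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 10))   # embedded samples
W = rng.normal(size=(10, 3))     # logistic-regression weights for 3 clusters
Q = soft_assignments(Z, W)
P = target_distribution(Q)
print("loss:", kl_clustering_loss(P, Q), "first assignments:", Q.argmax(axis=1)[:10])
```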
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
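To make the source-channel decision rule in the Candide passage concrete, here is a toy Python sketch that scores candidate English strings with a hand-made translation model Pr(f|e) and language model Pr(e) and returns the argmax of their product. The probability tables are invented for illustration and bear no relation to Candide's trained models.

```python
import math

# Toy stand-ins for the two factors; the real models are trained from bilingual data.
language_model = {            # Pr(e): prior over English strings
    "the cat": 0.004, "the mat": 0.003, "cat the": 0.0001,
}
translation_model = {         # Pr(f | e): channel probability
    ("le chat", "the cat"): 0.2,
    ("le chat", "the mat"): 0.001,
    ("le chat", "cat the"): 0.15,
}

def decode(f, candidates):
    """Return argmax_e Pr(f|e) * Pr(e), i.e. the source-channel decision rule."""
    def log_score(e):
        return math.log(translation_model.get((f, e), 1e-12)) + \
               math.log(language_model.get(e, 1e-12))
    return max(candidates, key=log_score)

print(decode("le chat", ["the cat", "the mat", "cat the"]))  # -> "the cat"
```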
{
"docid": "7654ff9f2e55d3831baa82700471c2ef",
"text": "Satellite imaging is well known as a useful tool in many scientific disciplines and various applications. Google Earth, with its free access, is now - thanks to increasing resolution and precision - such a tool. It improves the visualization and dissemination of scientific data, and opens doors to new discoveries. For example, many Nasca geoglyphs are now visible to Google Earth and so are the orientations of Chinese pyramids, which appear to be laid out with the aid of a magnetic compass. Google Earth can also \"see\" a previously unknown \"Monte Alban II\" close to the well known \"Monte Alban\" in the Valley of Oaxaca (Mexico), as well as prehistoric causeways in Mesoamerica (namely, in north Yucatan) and in the Chaco valley, New Mexico. We find that Google Earth can save time and resources significantly: before, during and after field measurements.",
"title": ""
},
{
"docid": "4eeef9a48f282bc6214c39d4c40303e7",
"text": "Failure management is a particular challenge problem in the automotive domain. Today's cars host a network of 30 to 80 electronic control units (ECUs), distributed over up to five interconnected in-car networks supporting hundreds to thousands of softwaredefined functions. This high degree of distribution of hard- and software components is a key contributor to the difficulty of failure management in vehicle. This paper addresses comprehensive failure management, starting from domain models for logical and deployment models of automotive software. These models capture interaction patterns as a critical part of both logical and deployment architectures, introducing failure detection and mitigation as \"wrapper\" services to \"unmanaged services\", i.e. services without failure management. We show how these models can be embedded into an interaction-centric development process, which captures failure management information across development phases. Finally, we exploit the failure management models to verify that a particular architecture meets its requirements under the stated failure hypothesis.",
"title": ""
},
{
"docid": "1e139fa9673f83ac619a5da53391b1ef",
"text": "In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves the features inspired by the free energy principle and the structural degradation model. Furthermore, the free energy theory also reveals that the HVS always tries to infer the meaningful part from the visual stimuli. In terms of this finding, we first predict an image that the HVS perceives from a distorted image based on the free energy theory, then the second group of features is composed of some HVS-inspired features (such as structural information and gradient magnitude) computed using the distorted and predicted images. The third group of features quantifies the possible losses of “naturalness” in the distorted image by fitting the generalized Gaussian distribution to mean subtracted contrast normalized coefficients. After feature extraction, our algorithm utilizes the support vector machine based regression module to derive the overall quality score. Experiments on LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of our introduced NR IQA metric compared to the state-of-the-art.",
"title": ""
},
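The NR-IQA passage above feeds quality-aware features into a support vector regression module to produce an overall score. The following sketch shows that final regression step with scikit-learn's SVR; the feature matrix is random placeholder data standing in for the free-energy, HVS and naturalness features described in the abstract, and the hyper-parameters are arbitrary.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder features: in the paper these would be free-energy, HVS and
# naturalness statistics computed from each distorted image.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 12))        # 12 quality-aware features per image
y_train = rng.uniform(0, 100, size=200)     # subjective quality scores (e.g. MOS)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)

X_test = rng.normal(size=(5, 12))
print(model.predict(X_test))                # predicted overall quality scores
```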
{
"docid": "ef99799bf977ba69a63c9f030fc65c7f",
"text": "In this paper, we propose a novel transductive learning framework named manifold-ranking based image retrieval (MRBIR). Given a query image, MRBIR first makes use of a manifold ranking algorithm to explore the relationship among all the data points in the feature space, and then measures relevance between the query and all the images in the database accordingly, which is different from traditional similarity metrics based on pair-wise distance. In relevance feedback, if only positive examples are available, they are added to the query set to improve the retrieval result; if examples of both labels can be obtained, MRBIR discriminately spreads the ranking scores of positive and negative examples, considering the asymmetry between these two types of images. Furthermore, three active learning methods are incorporated into MRBIR, which select images in each round of relevance feedback according to different principles, aiming to maximally improve the ranking result. Experimental results on a general-purpose image database show that MRBIR attains a significant improvement over existing systems from all aspects.",
"title": ""
},
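The manifold-ranking retrieval scheme described above spreads relevance scores from the query over a graph of all database items instead of using pair-wise distances alone. A compact NumPy sketch of the classic iteration f <- alpha*S*f + (1-alpha)*y is given below; the Gaussian affinity, alpha and the random features are illustrative choices, not the paper's settings.

```python
import numpy as np

def manifold_ranking(X, query_idx, sigma=0.5, alpha=0.99, iters=50):
    """Propagate relevance from the query over a normalized affinity graph."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                       # no self-loops
    Dm12 = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    S = Dm12 @ W @ Dm12                            # symmetrically normalized affinity
    y = np.zeros(len(X)); y[query_idx] = 1.0       # query indicator vector
    f = y.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y        # ranking score propagation
    return f                                       # higher score = more relevant

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 8))                   # stand-in image features
scores = manifold_ranking(feats, query_idx=0)
print(scores.argsort()[::-1][:5])                  # top-5 database images for the query
```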
{
"docid": "43ccb8a421fbcdd9a45e00782d0eaf5a",
"text": "Research in open office design has shown that it is negatively related to workers’ satisfaction with their physical environment and perceived productivity. A longitudinal study was conducted within a large private organization to investigate the effects of relocating employees from traditional offices to open offices. A measure was constructed that assessed employees’satisfaction with the physical environment, physical stress, coworker relations, perceived job performance, and the use of open office protocols. The sample consisted of 21 employees who completed the surveys at all three measurement intervals: prior to the move, 4 weeks after the move, and 6 279 ENVIRONMENT AND BEHAVIOR, Vol. 34 No. 3, May 2002 279-299 © 2002 Sage Publications at UCSF LIBRARY & CKM on November 26, 2013 eab.sagepub.com Downloaded from months after the move. Results indicated decreased employee satisfaction with all of the dependent measures following the relocation. Moreover, the employees’ dissatisfaction did not abate, even after an adjustment period. Reasons for these findings are discussed and recommendations are presented. Thepurpose of this studywas to determine the effects of relocating employees from traditional to open offices. The organization in which this study was conducted requested a longitudinal study to assess the long-term impact of office redesign on their employees’ satisfaction with the physical environment and productivity. Employees’ satisfaction with their work environment is important to organizations, as it has been shown to be directly related to employees’ job satisfaction and indirectly related to commitment and turnover intentions (Carlopio, 1996). There are many different types of office designs, ranging from traditional, private offices to open offices. Open offices also range in their design complexity from the “bull pen” in which the desks are arranged in neat rows to “landscaped”—or Bürolandschaft—offices that include “systems furniture” and panels of varying heights. In open offices, people who work together are physically located together with the geometry of the layout reflecting the pattern of the work groups. The various areas can be separated by plants, low movable screens, cabinets, shelving, or other furniture (Sanders & McCormick, 1993). Thus, within the broad category of open office, fine-grained differences can be rendered. For example, the number of partitions surrounding employees’ workspaces, spatial density (the amount of usable space per employee), openness (the overall openness of the office or the ratio of total square footage of the office to the total length of its interior walls and partitions), and architectural accessibility (the extent to which an employee’s individual workspace is accessible to the external intrusions of others) (Oldham, 1988; Oldham & Rotchford, 1983) can all vary. For example, Marans and Yan (1989) divided their national sample of offices into six different design categories based on the number of walls and partitions surrounding the employees’ workspace. For purposes of this study, offices were classified into one of the following five categories: (a) private closed, (b) private shared, (c) individual open, (d) shared open, or (e) bull pen. Open offices were designed in the 1950s and reached their height of popularity in the early 1970s, when many companies converted to these types of designs. 
Original claims by the designers of open offices were that they created flexible space, allowing layout to be more sensitive to changes in organizational size and structure. Workstations can be easily reconfigured at minimal cost to meet changing needs. It was also believed that the absence of internal physical barriers would facilitate communication between 280 ENVIRONMENT AND BEHAVIOR / May 2002 at UCSF LIBRARY & CKM on November 26, 2013 eab.sagepub.com Downloaded from individuals, groups, and even whole departments, which consequently, would improve morale and productivity. In addition, there was an estimated 20% savings in costs associated with creating and maintaining this type of office space (Hedge, 1982). Although many claims have been made regarding improvements in communication and productivity with open office designs, research findings have been mixed, with some studies reporting positive outcomes such as increased communication among coworkers (Allen & Gerstberger, 1973; Hundert & Greenfield, 1969; Ives & Ferdinands, 1974; Zahn, 1991) and supervisors (Sundstrom, Burt, & Kamp, 1980), higher judgments of aesthetic value (Brookes & Kaplan, 1972; Riland, 1970), and more group sociability (Brookes & Kaplan, 1972), whereas other studies have reported negative findings such as decreased performance (Becker, Gield, Gaylin, & Sayer, 1983; Oldham & Brass, 1979), lower judgments of functional efficiency (Brookes & Kaplan, 1972), lower levels of psychological privacy (Brookes & Kaplan, 1972; Hedge, 1982; Sundstrom, Town, Brown, Forman, & McGee, 1982; Sundstrom et al., 1980), environmental dissatisfaction (Marans & Yan, 1989; Oldham & Brass, 1979; Spreckelmeyer, 1993), fewer friendship opportunities (Oldham & Brass, 1979), supervisor feedback (Oldham & Brass, 1979), privacy (Brookes & Kaplan, 1972; Hundert & Greenfield, 1969), increased noise (Brookes & Kaplan, 1972; Sundstrom, et al., 1980), increased disturbances and distractions (Brookes & Kaplan, 1972; Hedge, 1982; Hundert & Greenfield, 1969; Ives & Ferdinands, 1974; Mercer, 1979; Nemecek & Grandjean, 1973; Oldham & Brass, 1979; Sundstrom, et al., 1980), and increased feelings of crowding (Sundstrom, et al., 1980). In a study by Zalesny and Farace (1987), employees relocated from traditional to open offices. Managers reported that their new work areas were less adequate than before the office change, that they had less privacy, and that they were less satisfied with the physical environment. Given these reported increases in disturbances and distractions, one would expect productivity to be negatively affected, especially in light of the findings from the Steelcase (Louis Harris & Associates, Inc., 1978) study in which 41% of a sample of office workers indicated that the most important office characteristic in getting their work done well was the ability to concentrate without noise or other distractions. However, these respondents rated the level of noise and other distractions in their work environments as the third worst characteristic of their workplace. In a follow-up study 2 years later, more than half of another sample of office workers reported that quiet was important to completing their work, yet only 48% reported that they actually experienced quiet offices. Recent statistics suggest that disturbances from office noise has reached epidemic proportions, with 54% of a sample of more than 2,000 U.S. and Canadian office workers in various office plans from 58 different sites Brennan et al. 
/ OFFICE DESIGN 281 at UCSF LIBRARY & CKM on November 26, 2013 eab.sagepub.com Downloaded from reporting that they are bothered often by one or more sources of noise, such as telephones, people talking, ventilation systems, piped-in music, and office equipment (Sundstrom, Town, Rice, Osborn, & Brill, 1994). Furthermore, reported disturbances from combined sources of noise were found to be negatively related to environmental satisfaction and job satisfaction. Contrary to expectations, however, Sundstrom et al. (1994) found no relationship between disturbances and self or supervisor ratings of performance. Many companies continue to adopt open office designs primarily because of the reduced costs in construction and maintenance. However, another reason why open plan offices are so popular is the belief that they facilitate greater communication, which in turn, facilitates greater productivity (Boje, 1971; Pile, 1978). This belief is based on the social facilitation hypothesis, which states that performance of routine tasks will improve in nonprivate areas (Geen & Gange, 1977). The theory suggests that employees who find their jobs boring may find that contact with other people provides a source of stimulation. However, Sundstrom (1978) found that social contact can exceed an optimum level, causing a worker to feel crowded, especially in areas with minimal privacy. As a result of crowding, discomfort may occur, which then causes decreased job performance. Research findings have shown a high correlation between architectural privacy (the visual and acoustic isolation supplied by an environment) and psychological privacy (a sense of control over access to oneself or one’s group), even among people with the least complex jobs (Sundstrom et al., 1980). Furthermore, no relationship has been found between architectural accessibility and social contact among coworkers. These findings directly contradict the claims of open office designers regarding increases in communication. Moreover, whereas one of the proposed advantages of the open office design was increased communication, they have actually been found to prohibit confidential conversations (Sundstrom, 1986). In short, empirical findings suggest that employees prefer privacy over accessibility because of the increases in noise and distractions experienced in nonprivate workspaces (Sundstrom et al., 1980). McCarrey, Peterson, Edwards, and Von Kulmiz (1974) suggested that the findings of lower satisfaction in open offices are due to employees’perceived lack of control over input to and from the environment. This occurs through lack of auditory privacy, lack of personal privacy, and lack of confidentiality of communications. This is supported by the concept of overload (Cohen, 1978), which posits that workers prefer quiet workplaces where neighboring coworkers are relatively few and far apart because exposure to sources of overload can then be controlled. Empirical research on open offices has supported the theory of overload, finding that employees tend to prefer lower 282",
"title": ""
},
{
"docid": "d80070cf7ab3d3e75c2da1525e59be67",
"text": "This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but considerations must be given regarding torque quality and the d-q axis inductances.",
"title": ""
},
{
"docid": "e01d5be587c73aaa133acb3d8aaed996",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "eb3a07c2295ba09c819c7a998b2fb337",
"text": "Recent advances have demonstrated the potential of network MIMO (netMIMO), which combines a practical number of distributed antennas as a virtual netMIMO AP (nAP) to improve spatial multiplexing of an WLAN. Existing solutions, however, either simply cluster nearby antennas as static nAPs, or dynamically cluster antennas on a per-packet basis so as to maximize the sum rate of the scheduled clients. To strike the balance between the above two extremes, in this paper, we present the design, implementation and evaluation of FlexNEMO, a practical two-phase netMIMO clustering system. Unlike previous per-packet clustering approaches, FlexNEMO only clusters antennas when client distribution and traffic pattern change, as a result being more practical to be implemented. A medium access control protocol is then designed to allow the clients at the center of nAPs to have a higher probability to gain access opportunities, but still ensure long-term fairness among clients. By combining on-demand clustering and priority-based access control, FlexNEMO not only improves antenna utilization, but also optimizes the channel condition for every individual client. We evaluated our design via both testbed experiments on USRPs and trace-driven emulations. The results demonstrate that FlexNEMO can deliver 94.7% and 93.7% throughput gains over static antenna clustering in a 4-antenna testbed and 16-antenna emulation, respectively.",
"title": ""
},
{
"docid": "63f0ff6663f334e1ab05d0ce5d2239cf",
"text": "Railroad tracks need to be periodically inspected and monitored to ensure safe transportation. Automated track inspection using computer vision and pattern recognition methods has recently shown the potential to improve safety by allowing for more frequent inspections while reducing human errors. Achieving full automation is still very challenging due to the number of different possible failure modes, as well as the broad range of image variations that can potentially trigger false alarms. In addition, the number of defective components is very small, so not many training examples are available for the machine to learn a robust anomaly detector. In this paper, we show that detection performance can be improved by combining multiple detectors within a multitask learning framework. We show that this approach results in improved accuracy for detecting defects on railway ties and fasteners.",
"title": ""
},
{
"docid": "fb7c268419d798587e1675a5a1a37232",
"text": "Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image reranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets.",
"title": ""
},
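The passage above builds a compact descriptor by pooling convolutional activations over several image regions from a single forward pass. The sketch below shows plain region max-pooling and aggregation in NumPy; the integral-image acceleration the paper introduces is deliberately omitted, and the three-region grid is a simplified stand-in.

```python
import numpy as np

def region_max_pool(fmap, regions):
    """Max-pool a C x H x W activation map over each region, then aggregate.
    Direct pooling is used here; the paper accelerates this with integral images."""
    vecs = []
    for (y0, x0, y1, x1) in regions:
        v = fmap[:, y0:y1, x0:x1].max(axis=(1, 2))   # per-channel max inside the region
        v = v / (np.linalg.norm(v) + 1e-12)          # L2-normalize each region vector
        vecs.append(v)
    agg = np.sum(vecs, axis=0)                       # sum-aggregate region vectors
    return agg / (np.linalg.norm(agg) + 1e-12)       # compact global descriptor

rng = np.random.default_rng(0)
fmap = rng.random((256, 20, 30))                     # last conv-layer activations
regions = [(0, 0, 20, 30), (0, 0, 20, 15), (0, 15, 20, 30)]  # whole map plus two halves
print(region_max_pool(fmap, regions).shape)          # (256,)
```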
{
"docid": "1c6b67ae3b069e40b600f1479782ef19",
"text": "The evolution of IoT network is basically the effect of requirement of a system that is having better capabilities compared to existing ones. One main feature of the system that we need to satisfy is long range and low power. The protocols IEEE 802.15.4, 6LoWPAN, IPv6 and CoAP are used for achieving low rate, low power, large address space and reducing the data space respectively. In this paper we discuss other protocols which are intended mainly for providing long distance transmission with the less amount of power consumption. It mainly says about LoRa (Long Range protocol), IEEE 802.22 and weightless.",
"title": ""
},
{
"docid": "0c975acb5ab3f413078171840b17b232",
"text": "We have analysed associated factors in 164 patients with acute compartment syndrome whom we treated over an eight-year period. In 69% there was an associated fracture, about half of which were of the tibial shaft. Most patients were men, usually under 35 years of age. Acute compartment syndrome of the forearm, with associated fracture of the distal end of the radius, was again seen most commonly in young men. Injury to soft tissues, without fracture, was the second most common cause of the syndrome and one-tenth of the patients had a bleeding disorder or were taking anticoagulant drugs. We found that young patients, especially men, were at risk of acute compartment syndrome after injury. When treating such injured patients, the diagnosis should be made early, utilising measurements of tissue pressure.",
"title": ""
}
] |
scidocsrr
|
5ddac02311b8e3bda5b0039980d0ca71
|
SCREENING OF PLANT ESSENTIAL OILS FOR ANTIFUNGAL ACTIVITY AGAINST MALASSEZIA FURFUR
|
[
{
"docid": "39db226d1f8980b3f0bc008c42248f2f",
"text": "In vitro studies have demonstrated antibacterial activity of essential oils (EOs) against Listeria monocytogenes, Salmonella typhimurium, Escherichia coli O157:H7, Shigella dysenteria, Bacillus cereus and Staphylococcus aureus at levels between 0.2 and 10 microl ml(-1). Gram-negative organisms are slightly less susceptible than gram-positive bacteria. A number of EO components has been identified as effective antibacterials, e.g. carvacrol, thymol, eugenol, perillaldehyde, cinnamaldehyde and cinnamic acid, having minimum inhibitory concentrations (MICs) of 0.05-5 microl ml(-1) in vitro. A higher concentration is needed to achieve the same effect in foods. Studies with fresh meat, meat products, fish, milk, dairy products, vegetables, fruit and cooked rice have shown that the concentration needed to achieve a significant antibacterial effect is around 0.5-20 microl g(-1) in foods and about 0.1-10 microl ml(-1) in solutions for washing fruit and vegetables. EOs comprise a large number of components and it is likely that their mode of action involves several targets in the bacterial cell. The hydrophobicity of EOs enables them to partition in the lipids of the cell membrane and mitochondria, rendering them permeable and leading to leakage of cell contents. Physical conditions that improve the action of EOs are low pH, low temperature and low oxygen levels. Synergism has been observed between carvacrol and its precursor p-cymene and between cinnamaldehyde and eugenol. Synergy between EO components and mild preservation methods has also been observed. Some EO components are legally registered flavourings in the EU and the USA. Undesirable organoleptic effects can be limited by careful selection of EOs according to the type of food.",
"title": ""
}
] |
[
{
"docid": "a6c9ff64c9c007e71192eb7023c8617f",
"text": "Elderly individuals can access online 3D virtual stores from their homes to make purchases. However, most virtual environments (VEs) often elicit physical responses to certain types of movements in the VEs. Some users exhibit symptoms that parallel those of classical motion sickness, called cybersickness, both during and after the VE experience. This study investigated the factors that contribute to cybersickness among the elderly when immersed in a 3D virtual store. The results of the first experiment show that the simulator sickness questionnaire (SSQ) scores increased significantly by the reasons of navigational rotating speed and duration of exposure. Based on these results, a warning system with fuzzy control for combating cybersickness was developed. The results of the second and third experiments show that the proposed system can efficiently determine the level of cybersickness based on the fuzzy sets analysis of operating signals from scene rotating speed and exposure duration, and subsequently combat cybersickness. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "32a45d3c08e24d29ad5f9693253c0e9e",
"text": "This paper presents comparative study of high-speed, low-power and low voltage full adder circuits. Our approach is based on XOR-XNOR design full adder circuits in a single unit. A low power and high performance 9T full adder cell using a design style called “XOR (3T)” is discussed. The designed circuit commands a high degree of regularity and symmetric higher density than the conventional CMOS design style as well as it lowers power consumption by using XOR (3T) logic circuits. Gate Diffusion Input (GDI) technique of low-power digital combinatorial circuit design is also described. This technique helps in reducing the power consumption and the area of digital circuits while maintaining low complexity of logic design. This paper analyses, evaluates and compares the performance of various adder circuits. Several simulations conducted using different voltage supplies, load capacitors and temperature variation demonstrate the superiority of the XOR (3T) based full adder designs in term of delay, power and power delay product (PDP) compared to the other full adder circuits. Simulation results illustrate the superiority of the designed adder circuits against the conventional CMOS, TG and Hybrid full adder circuits in terms of power, delay and power delay product (PDP). .",
"title": ""
},
{
"docid": "d10c17324f8f6d4523964f10bc689d8e",
"text": "This article studied a novel Log-Periodic Dipole Antenna (LPDA) with distributed inductive load for size reduction. By adding a short circuit stub at top of the each element, the dimensions of the LPDA are reduced by nearly 50% compared to the conventional one. The impedance bandwidth of the presented antenna is nearly 122% (54~223MHz) (S11<;10dB), and this antenna is very suited for BROADCAST and TV applications.",
"title": ""
},
{
"docid": "0551e9faef769350102a404fa0b61dc1",
"text": "Lignocellulosic biomass is a complex biopolymer that is primary composed of cellulose, hemicellulose, and lignin. The presence of cellulose in biomass is able to depolymerise into nanodimension biomaterial, with exceptional mechanical properties for biocomposites, pharmaceutical carriers, and electronic substrate's application. However, the entangled biomass ultrastructure consists of inherent properties, such as strong lignin layers, low cellulose accessibility to chemicals, and high cellulose crystallinity, which inhibit the digestibility of the biomass for cellulose extraction. This situation offers both challenges and promises for the biomass biorefinery development to utilize the cellulose from lignocellulosic biomass. Thus, multistep biorefinery processes are necessary to ensure the deconstruction of noncellulosic content in lignocellulosic biomass, while maintaining cellulose product for further hydrolysis into nanocellulose material. In this review, we discuss the molecular structure basis for biomass recalcitrance, reengineering process of lignocellulosic biomass into nanocellulose via chemical, and novel catalytic approaches. Furthermore, review on catalyst design to overcome key barriers regarding the natural resistance of biomass will be presented herein.",
"title": ""
},
{
"docid": "6951f051c3fe9ab24259dcc6f812fc68",
"text": "User Generated Content has become very popular since the birth of web services such as YouTube allowing the distribution of such user-produced media content in an easy manner. YouTube-like services are different from existing traditional VoD services because the service provider has only limited control over the creation of new content. We analyze how the content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The analysis of the traffic shows that: (1) No strong correlation is observed between global and local popularity; (2) neither time scale nor user population has an impact on the local popularity distribution; (3) video clips of local interest have a high local popularity. Using our measurement data to drive trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures on the performance of a YouTube-like VoD service. The results of these simulations show that client-based local caching, P2P-based distribution, and proxy caching can reduce network traffic significantly and allow faster access to video clips.",
"title": ""
},
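The caching implications discussed above can be explored with a very small trace-driven simulation. The Python sketch below measures the hit rate of a client-side LRU cache under a Zipf-like popularity distribution; the catalog size, cache size and popularity skew are arbitrary toy values, not figures from the measurement study.

```python
import random
from collections import OrderedDict

def lru_hit_rate(requests, cache_size):
    """Fraction of requests served from a local LRU cache of video clips."""
    cache, hits = OrderedDict(), 0
    for vid in requests:
        if vid in cache:
            hits += 1
            cache.move_to_end(vid)             # refresh recency
        else:
            cache[vid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)      # evict least-recently-used clip
    return hits / len(requests)

# Zipf-like popularity: a few clips attract most requests, a long tail gets the rest.
random.seed(0)
catalog = list(range(10_000))
weights = [1.0 / (rank + 1) for rank in range(len(catalog))]
trace = random.choices(catalog, weights=weights, k=50_000)
print(f"hit rate with a 500-clip cache: {lru_hit_rate(trace, 500):.2%}")
```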
{
"docid": "d3fc62a9858ddef692626b1766898c9f",
"text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.",
"title": ""
},
{
"docid": "0f25f9bc31f4913e8ad8e5015186c0d4",
"text": "Fractures of the scaphoid bone mainly occur in young adults and constitute 2-7% of all fractures. The specific blood supply in combination with the demanding functional requirements can easily lead to disturbed fracture healing. Displaced scaphoid fractures are seen on radiographs. The diagnostic strategy of suspected scaphoid fractures, however, is surrounded by controversy. Bone scintigraphy, magnetic resonance imaging and computed tomography have their shortcomings. Early treatment leads to a better outcome. Scaphoid fractures can be treated conservatively and operatively. Proximal scaphoid fractures and displaced scaphoid fractures have a worse outcome and might be better off with an open or closed reduction and internal fixation. The incidence of scaphoid non-unions has been reported to be between 5 and 15%. Non-unions are mostly treated operatively by restoring the anatomy to avoid degenerative wrist arthritis.",
"title": ""
},
{
"docid": "6cc8164c14c6a95617590e66817c0db7",
"text": "nor fazila k & ku Halim kH. 2012. Effects of soaking on yield and quality of agarwood oil. The aims of this study were to investigate vaporisation temperature of agarwood oil, determine enlargement of wood pore size, analyse chemical components in soaking solvents and examine the chemical composition of agarwood oil extracted from soaked and unsoaked agarwood. Agarwood chips were soaked in two different acids, namely, sulphuric and lactic acids for 168 hours at room temperature (25 °C). Effects of soaking were determined using thermogravimetric analysis (TGA), scanning electron microscope (SEM) and gas chromatography-mass spectrum analysis. With regard to TGA curve, a small portion of weight loss was observed between 110 and 200 °C for agarwood soaked in lactic acid. SEM micrograph showed that the lactic acid-soaked agarwood demonstrated larger pore size. High quality agarwood oil was obtained from soaked agarwood. In conclusion, agarwood soaked in lactic acid with concentration of 0.1 M had the potential to reduce the vaporisation temperature of agarwood oil and enlarge the pore size of wood, hence, improving the yield and quality of agarwood oil.",
"title": ""
},
{
"docid": "fcd5bdd4e7e4d240638c84f7d61f8f4b",
"text": "We investigate the performance of hysteresis-free short-channel negative-capacitance FETs (NCFETs) by combining quantum-mechanical calculations with the Landau–Khalatnikov equation. When the subthreshold swing (SS) becomes smaller than 60 mV/dec, a negative value of drain-induced barrier lowering is obtained. This behavior, drain-induced barrier rising (DIBR), causes negative differential resistance in the output characteristics of the NCFETs. We also examine the performance of an inverter composed of hysteresis-free NCFETs to assess the effects of DIBR at the circuit level. Contrary to our expectation, although hysteresis-free NCFETs are used, hysteresis behavior is observed in the transfer properties of the inverter. Furthermore, it is expected that the NCFET inverter with hysteresis behavior can be used as a Schmitt trigger inverter.",
"title": ""
},
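The NCFET analysis above couples device electrostatics with the Landau-Khalatnikov equation. As a numerical illustration of where the negative capacitance comes from, the sketch below evaluates the static LK voltage across a ferroelectric layer, V = t_FE * (2*alpha*Q + 4*beta*Q^3 + 6*gamma*Q^5), and shows dV/dQ < 0 around Q = 0. The Landau coefficients and thickness are illustrative values, not parameters fitted to the paper's devices.

```python
import numpy as np

# Illustrative Landau coefficients and thickness (not fitted to any real ferroelectric).
alpha, beta, gamma = -2.5e9, 6.0e10, 1.5e11   # units: m/F, m^5/(F*C^2), m^9/(F*C^4)
t_fe = 5e-9                                   # ferroelectric thickness (m)

def v_fe(q):
    """Static Landau-Khalatnikov voltage drop across the ferroelectric layer."""
    return t_fe * (2 * alpha * q + 4 * beta * q**3 + 6 * gamma * q**5)

q = np.linspace(-0.15, 0.15, 7)               # charge density (C/m^2)
dv_dq = np.gradient(v_fe(q), q)               # differential "capacitance" slope
print(np.column_stack((q, v_fe(q), dv_dq)))   # dV/dQ < 0 near Q = 0: negative capacitance
```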
{
"docid": "4a86a0707e6ac99766f89e81cccc5847",
"text": "Magnetic core loss is an emerging concern for integrated POL converters. As switching frequency increases, core loss is comparable to or even higher than winding loss. Accurate measurement of core loss is important for magnetic design and converter loss estimation. And exploring new high frequency magnetic materials need a reliable method to evaluate their losses. However, conventional method is limited to low frequency due to sensitivity to phase discrepancy. In this paper, a new method is proposed for high frequency (1MHz∼50MHz) core loss measurement. The new method reduces the phase induced error from over 100% to <5%. So with the proposed methods, the core loss can be accurately measured.",
"title": ""
},
{
"docid": "643e97c3bc0cdde54bf95720fe52f776",
"text": "Ego-motion estimation based on images from a stereo camera has become a common function for autonomous mobile systems and is gaining increasing importance in the automotive sector. Unlike general robotic platforms, vehicles have a suspension adding degrees of freedom and thus complexity to their dynamics model. Some parameters of the model, such as the vehicle mass, are non-static as they depend on e.g. the specific load conditions and thus need to be estimated online to guarantee a concise and safe autonomous maneuvering of the vehicle. In this paper, a novel visual odometry based approach to simultaneously estimate ego-motion and selected vehicle parameters using a dual Ensemble Kalman Filter and a non-linear single-track model with pitch dynamics is presented. The algorithm has been validated using simulated data and showed a good performance for both the estimation of the ego-motion and of the relevant vehicle parameters.",
"title": ""
},
{
"docid": "e7f91b90eab54dfd7f115a3a0225b673",
"text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.",
"title": ""
},
{
"docid": "6816bb15dba873244306f22207525bee",
"text": "Imbalance suggests a feeling of dynamism and movement in static objects. It is therefore not surprising that many 3D models stand in impossibly balanced configurations. As long as the models remain in a computer this is of no consequence: the laws of physics do not apply. However, fabrication through 3D printing breaks the illusion: printed models topple instead of standing as initially intended. We propose to assist users in producing novel, properly balanced designs by interactively deforming an existing model. We formulate balance optimization as an energy minimization, improving stability by modifying the volume of the object, while preserving its surface details. This takes place during interactive editing: the user cooperates with our optimizer towards the end result. We demonstrate our method on a variety of models. With our technique, users can produce fabricated objects that stand in one or more surprising poses without requiring glue or heavy pedestals.",
"title": ""
},
{
"docid": "46a47931c51a3b5580580d27a9a6d132",
"text": "In airline service industry, it is difficult to collect data about customers' feedback by questionnaires, but Twitter provides a sound data source for them to do customer sentiment analysis. However, little research has been done in the domain of Twitter sentiment classification about airline services. In this paper, an ensemble sentiment classification strategy was applied based on Majority Vote principle of multiple classification methods, including Naive Bayes, SVM, Bayesian Network, C4.5 Decision Tree and Random Forest algorithms. In our experiments, six individual classification approaches, and the proposed ensemble approach were all trained and tested using the same dataset of 12864 tweets, in which 10 fold evaluation is used to validate the classifiers. The results show that the proposed ensemble approach outperforms these individual classifiers in this airline service Twitter dataset. Based on our observations, the ensemble approach could improve the overall accuracy in twitter sentiment classification for other services as well.",
"title": ""
},
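A majority-vote ensemble like the one described above is straightforward to assemble with scikit-learn, as sketched below. Bayesian networks and C4.5 are not available in scikit-learn, so a decision tree and a random forest serve as close stand-ins, and the six example tweets and all hyper-parameters are invented for illustration.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

tweets = ["flight was delayed again, terrible service",
          "crew was friendly and boarding was quick",
          "lost my luggage, worst airline ever",
          "smooth flight and great legroom, thank you",
          "two hours on the tarmac with no updates",
          "upgrade made my day, lovely staff"]
labels = ["neg", "pos", "neg", "pos", "neg", "pos"]

ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("svm", LinearSVC()),
                    ("dt", DecisionTreeClassifier()),
                    ("rf", RandomForestClassifier(n_estimators=50))],
        voting="hard"))          # majority vote over the individual predictions
ensemble.fit(tweets, labels)
print(ensemble.predict(["the gate agent was rude and the flight got cancelled"]))
```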
{
"docid": "ba34f6120b08c57cec8794ec2b9256d2",
"text": "Principles of reconstruction dictate a number of critical points for successful repair. To achieve aesthetic and functional goals, the dermatologic surgeon should avoid deviation of anatomical landmarks and free margins, maintain shape and symmetry, and repair with skin of similar characteristics. Reconstruction of the ear presents a number of unique challenges based on the limited amount of adjacent lax tissue within the cosmetic unit and the structure of the auricle, which consists of a relatively thin skin surface and flexible cartilaginous framework.",
"title": ""
},
{
"docid": "e96cf46cc99b3eff60d32f3feb8afc47",
"text": "We present an field programmable gate arrays (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
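For reference, the same Viola-Jones cascade that the FPGA design implements can be exercised in software with OpenCV, which is convenient for generating ground truth when validating the hardware output. The input file name below is hypothetical; the cascade file is the stock frontal-face model shipped with the opencv-python package.

```python
import cv2

# Load the stock Haar cascade bundled with opencv-python.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")              # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                    # normalize illumination

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
print(f"{len(faces)} face(s) detected")
```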
{
"docid": "0e2fdb9fc054e47a3f0b817f68de68b1",
"text": "Recent regulatory guidance suggests that drug metabolites identified in human plasma should be present at equal or greater levels in at least one of the animal species used in safety assessments (MIST). Often synthetic standards for the metabolites do not exist, thus this has introduced multiple challenges regarding the quantitative comparison of metabolites between human and animals. Various bioanalytical approaches are described to evaluate the exposure of metabolites in animal vs. human. A simple LC/MS/MS peak area ratio comparison approach is the most facile and applicable approach to make a first assessment of whether metabolite exposures in animals exceed that in humans. In most cases, this measurement is sufficient to demonstrate that an animal toxicology study of the parent drug has covered the safety of the human metabolites. Methods whereby quantitation of metabolites can be done in the absence of chemically synthesized authentic standards are also described. Only in rare cases, where an actual exposure measurement of a metabolite is needed, will a validated or qualified method requiring a synthetic standard be needed. The rigor of the bioanalysis is increased accordingly based on the results of animal:human ratio measurements. This data driven bioanalysis strategy to address MIST issues within standard drug development processes is described.",
"title": ""
},
{
"docid": "18defc8666f7fea7ae89ff3d5d833e0a",
"text": "[1] We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.",
"title": ""
},
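At the heart of the MInTS description above is a regularized least-squares inversion of interferogram measurements into a deformation time series. The NumPy sketch below solves a toy version of that inversion with a second-difference smoothing regularizer; it works directly on displacement increments rather than in the wavelet domain, and all numbers are illustrative.

```python
import numpy as np

def regularized_lsq(G, d, lam):
    """Solve min ||G m - d||^2 + lam * ||L m||^2 with a smoothing regularizer L.
    A larger lam damps the model more heavily, e.g. where acquisitions are sparse."""
    n = G.shape[1]
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # second-difference operator
    A = G.T @ G + lam * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Toy setup: each interferogram measures the displacement accumulated between two dates.
dates = np.arange(10)                        # acquisition epochs
pairs = [(0, 2), (1, 3), (2, 5), (3, 6), (5, 8), (6, 9)]
G = np.zeros((len(pairs), len(dates) - 1))   # design matrix over incremental displacements
for row, (i, j) in enumerate(pairs):
    G[row, i:j] = 1.0
true_incr = 0.3 * np.ones(len(dates) - 1)    # steady 0.3 cm/epoch deformation
d = G @ true_incr + np.random.default_rng(0).normal(0, 0.05, len(pairs))
print(regularized_lsq(G, d, lam=0.1))        # recovered incremental displacements
```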
{
"docid": "be1b9731df45408571e75d1add5dfe9c",
"text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.",
"title": ""
},
{
"docid": "93e5ed1d67fe3d20c7b0177539e509c4",
"text": "Business models that rely on social media and user-generated content have shifted from the more traditional business model, where value for the organization is derived from the one-way delivery of products and/or services, to the provision of intangible value based on user engagement. This research builds a model that hypothesizes that the user experiences from social interactions among users, operationalized as personalization, transparency, access to social resources, critical mass of social acquaintances, and risk, as well as with the technical features of the social media platform, operationalized as the completeness, flexibility, integration, and evolvability, influence user engagement and subsequent usage behavior. Using survey responses from 408 social media users, findings suggest that both social and technical factors impact user engagement and ultimately usage with additional direct impacts on usage by perceptions of the critical mass of social acquaintances and risk. KEywORdS Social Interactions, Social Media, Social Networking, Technical Features, Use, User Engagement, User Experience",
"title": ""
}
] |
scidocsrr
|
3e39a8a2ed84ab07d64bc8a385a1d969
|
6 Seconds of Sound and Vision: Creativity in Micro-videos
|
[
{
"docid": "c87fa26d080442b1527fcc6a74df7ec4",
"text": "We present MIRtoolbox, an integrated set of functions written in Matlab, dedicated to the extraction of musical features from audio files. The design is based on a modular framework: the different algorithms are decomposed into stages, formalized using a minimal set of elementary mechanisms, and integrating different variants proposed by alternative approaches – including new strategies we have developed –, that users can select and parametrize. This paper offers an overview of the set of features, related, among others, to timbre, tonality, rhythm or form, that can be extracted with MIRtoolbox. Four particular analyses are provided as examples. The toolbox also includes functions for statistical analysis, segmentation and clustering. Particular attention has been paid to the design of a syntax that offers both simplicity of use and transparent adaptiveness to a multiplicity of possible input types. Each feature extraction method can accept as argument an audio file, or any preliminary result from intermediary stages of the chain of operations. Also the same syntax can be used for analyses of single audio files, batches of files, series of audio segments, multichannel signals, etc. For that purpose, the data and methods of the toolbox are organised in an object-oriented architecture. 1. MOTIVATION AND APPROACH MIRToolbox is a Matlab toolbox dedicated to the extraction of musically-related features from audio recordings. It has been designed in particular with the objective of enabling the computation of a large range of features from databases of audio files, that can be applied to statistical analyses. Few softwares have been proposed in this area. The most important one, Marsyas [1], provides a general architecture for connecting audio, soundfiles, signal processing blocks and machine learning (see section 5 for more details). One particularity of our own approach relies in the use of the Matlab computing environment, which offers good visualisation capabilities and gives access to a large variety of other toolboxes. In particular, the MIRToolbox makes use of functions available in recommended public-domain toolboxes such as the Auditory Toolbox [2], NetLab [3], or SOMtoolbox [4]. Other toolboxes, such as the Statistics toolbox or the Neural Network toolbox from MathWorks, can be directly used for further analyses of the features extracted by MIRToolbox without having to export the data from one software to another. Such computational framework, because of its general objectives, could be useful to the research community in Music Information Retrieval (MIR), but also for educational purposes. For that reason, particular attention has been paid concerning the ease of use of the toolbox. In particular, complex analytic processes can be designed using a very simple syntax, whose expressive power comes from the use of an object-oriented paradigm. The different musical features extracted from the audio files are highly interdependent: in particular, as can be seen in figure 1, some features are based on the same initial computations. In order to improve the computational efficiency, it is important to avoid redundant computations of these common components. Each of these intermediary components, and the final musical features, are therefore considered as building blocks that can been freely articulated one with each other. 
Besides, in keeping with the objective of optimal ease of use of the toolbox, each building block has been conceived in a way that it can adapt to the type of input data. For instance, the computation of the MFCCs can be based on the waveform of the initial audio signal, or on the intermediary representations such as spectrum, or mel-scale spectrum (see Fig. 1). Similarly, autocorrelation is computed for different range of delays depending on the type of input data (audio waveform, envelope, spectrum). This decomposition of all the set of feature extraction algorithms into a common set of building blocks has the advantage of offering a synthetic overview of the different approaches studied in this domain of research. 2. FEATURE EXTRACTION 2.1. Feature overview Figure 1 shows an overview of the main features implemented in the toolbox. All the different processes start from the audio signal (on the left) and form a chain of operations proceeding to right. The vertical disposition of the processes indicates an increasing order of complexity of the operations, from simplest computation (top) to more detailed auditory modelling (bottom). Each musical feature is related to one of the musical dimensions traditionally defined in music theory. Boldface characters highlight features related to pitch, to tonality (chromagram, key strength and key Self-Organising Map, or SOM) and to dynamics (Root Mean Square, or RMS, energy). Bold italics indicate features related to rhythm, namely tempo, pulse clarity and fluctuation. Simple italics highlight a large set of features that can be associated to timbre. Among them, all the operators in grey italics can be in fact applied to many others different representations: for instance, statistical moments such as centroid, kurtosis, etc., can be applied to either spectra, envelopes, but also to histograms based on any given feature. One of the simplest features, zero-crossing rate, is based on a simple description of the audio waveform itself: it counts the number of sign changes of the waveform. Signal energy is computed using root mean square, or RMS [5]. The envelope of the audio signal offers timbral characteristics of isolated sonic event. FFT-based spectrum can be computed along the frequency domain or along Mel-bands, with linear or decibel energy scale, and",
"title": ""
}
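MIRtoolbox itself is a Matlab package, but the simplest descriptors it mentions, zero-crossing rate, RMS energy and the spectral centroid, are easy to reproduce in Python, as sketched below on a synthetic 440 Hz tone standing in for an audio file loaded from disk. The frame and hop sizes are arbitrary choices, not the toolbox defaults.

```python
import numpy as np

def frame(x, size=2048, hop=512):
    """Slice a signal into overlapping analysis frames."""
    n = 1 + (len(x) - size) // hop
    return np.stack([x[i * hop: i * hop + size] for i in range(n)])

def zero_crossing_rate(frames):
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def rms_energy(frames):
    return np.sqrt(np.mean(frames ** 2, axis=1))

def spectral_centroid(frames, sr):
    spec = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)

# One second of a 440 Hz tone as a stand-in for real audio.
sr = 22050
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 440 * t)
f = frame(x)
print(zero_crossing_rate(f)[:3], rms_energy(f)[:3], spectral_centroid(f, sr)[:3])
```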
] |
[
{
"docid": "f6294233564c0d84c72c37fc3c88c2df",
"text": "The structural theory of average case com plexity introduced by Levin gives a for mal setting for discussing the types of inputs for which a problem is di cult This is vital to understanding both when a seemingly di cult e g NP complete problem is actually easy on almost all in stances and to determining which prob lems might be suitable for applications re quiring hard problems such as cryptog raphy This paper attempts to summarize the state of knowledge in this area includ ing some folklore results that have not explicitly appeared in print We also try to standardize and unify de nitions Fi nally we indicate what we feel are inter esting research directions We hope that this paper will motivate more research in this area and provide an introduction to the area for people new to it Research Supported by NSF YI Award CCR Sloan Research Fellowship BR and USA Israel BSF Grant Introduction There is a large gap between a problem not being easy and the same problem be ing di cult A problem could have no e cient worst case algorithm but still be solvable for most instances or on in stances that arise in practice Thus a con ventional completeness result can be rel atively meaningless in terms of the real life di culty of the problem since two problems can both be NP complete but one can be solvable quickly on most in stances that arise in practice and the other not However average run time argu ments of particular algorithms for partic ular distributions are also unenlightening as to the complexity of real instances of a problem First they only analyze the performance of speci c algorithms rather than describing the inherent complexity of the problem Secondly the distributions of inputs that arise in practice are often di cult to characterize so analysis of al gorithms on nice distributions does not capture the real life average di culty Thus a structural theory of distribu tional complexity is necessary Such a the ory should allow one to compare the inher ent intractability of distributional prob lems computational problems together with distributions on instances It should also provide results that are meaningful with respect to instances from an arbitrary distribution that might arise Besides capturing more accurately the real world di culty of problems the average case complexity of a problem is important in determining its suitability for applications such as cryptography and the de randomization of algorithms For such applications one needs more than the mere existence of hard instances of the problem one needs to be able to generate instances in a way that guarantees that al most all generated instances are hard For these reasons Levin in L intro duced a structural theory of the average case complexity of problems The main contributions of his paper were a gen eral notion of a distributional problem a machine independent de nition of the average case performance of an algorithm an appropriate notion of reduction be tween distributional problems and an ex ample of a problem that was complete for the class of all NP problems on su ciently uniform distributions Since he and many others have built on this foundation see e g BCGL G VL G Despite the above work I feel the struc ture of average case complexity has not re ceived the attention due to a central prob lem in complexity theory The goal of this paper is to motivate more research in this area and to make the research frontier more accessible to people starting work in this area Several caveats are necessary with re spect to this 
goal. As this is basically a propaganda piece, I will present my own personal view of what makes the field exciting. I will not present a comprehensive summary or bibliography of work in the area, nor do I claim that the work mentioned here is the best in the area. I will also attempt to clarify and simplify concepts in the area by presenting both my own equivalent formulations and also by trying to make a uniform taxonomy for concepts. The current definitions are the product of much thought and work by top researchers, so many researchers in the area will consider my attempts to do this as a confusion and complicating of the issues rather than a clarification and simplification of them. However, I feel someone starting out in the area might benefit from seeing a variety of perspectives. Many of the results mentioned in this paper should be considered folklore, in that they merely formally state ideas that are well known to researchers in the area, but may not be obvious to beginners and, to the best of my knowledge, do not appear elsewhere in print. Five possible worlds. To illustrate the central role in complexity theory of questions regarding the average-case complexity of problems in NP, we will now take a guided tour of five possible (i.e., not currently known to be false) outcomes for these questions and see how they would affect computer science. In each such world, we will look at the influence of the outcomes of these questions on algorithm design for such areas as artificial intelligence and VLSI design, and for cryptography and computer security. We will also consider the more technical issue of derandomization of algorithms, the simulation of probabilistic algorithms by deterministic algorithms. This will have a much smaller impact on society than the other issues, but we include it as another situation, besides cryptography, where having difficult problems is actually useful. Finally, to provide a human angle, we will consider the impact these questions would have had on the sad story of Professor Grouse, the teacher who assigned the young Gauss's class the problem of summing the numbers from 1 to 100. The beginning of this story is well known, but few people realize that Professor Grouse then became obsessed with getting his revenge by humiliating Gauss in front of the class by inventing problems Gauss could not solve. In real life, this led to Grouse's commitment to a lunatic asylum (not a pleasant end, especially in that century) and to Gauss's developing a life-long interest in number-theoretic algorithms. Here we imagine how the story might have turned out had Grouse been an expert in computational complexity at a time when the main questions about average-case complexity had been resolved. We believe that this story inspired Gurevich's Challenger-Solver Game [G]. In this section, we will leave unresolved the questions of how to properly formalize the complexity assumptions behind the worlds. In particular, we will leave open which model of computation we are talking about (e.g., deterministic algorithms, probabilistic algorithms, Boolean circuits, or even quantum computers), and we shall ignore quantitative issues such as whether an n-time algorithm for satisfiability would be feasible. We also assume that if an algorithm exists, then it is known to the inhabitants of the world. We also ignore the issue of whether it might be possible that algorithms are fast for some input sizes but not others, which would have the effect of bouncing us from world to world as technology advanced. We will take as our standard for whether these 
worlds are indeed possible the existence of an oracle relative to which the appropriate assumptions hold. Of course, this is far from a definitive answer, and the existence of an oracle should not stop the researcher from attempting to find non-relativizing techniques to narrow the range of possibilities. Indeed, it would be wonderful to eliminate one or more of these worlds from consideration, preferably the pestilent Pessiland. We will try to succinctly and informally describe what type of algorithm and/or lower bound would be needed to conclude that we are in a particular world. Barring the caveats mentioned in the previous paragraph, these conditions will basically cover all eventualities, thus showing that these are the only possible worlds. This is an informal statement and will be more true for some worlds than others.",
"title": ""
},
{
"docid": "8a1ba356c34935a2f3a14656138f0414",
"text": "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.",
"title": ""
},
{
"docid": "ee4f354f43b27e0275ae4b06869dddde",
"text": "We report on the design of the new clearinghouse adopted by the National Resident Matching Program, which annually fills approximately 20,000 jobs for new physicians. Because the market has complementarities between applicants and between positions, the theory of simple matching markets does not apply directly. However, computational experiments show the theory provides good approximations. Furthermore, the set of stable matchings, and the opportunities for strategic manipulation, are surprisingly small. A new kind of \"core convergence\" result explains this; that each applicant interviews only a small fraction of available positions is important. We also describe engineering aspects of the design process.",
"title": ""
},
{
"docid": "bcd81794f9e1fc6f6b92fd36ccaa8dac",
"text": "Reliable detection and avoidance of obstacles is a crucial prerequisite for autonomously navigating robots as both guarantee safety and mobility. To ensure safe mobility, the obstacle detection needs to run online, thereby taking limited resources of autonomous systems into account. At the same time, robust obstacle detection is highly important. Here, a too conservative approach might restrict the mobility of the robot, while a more reckless one might harm the robot or the environment it is operating in. In this paper, we present a terrain-adaptive approach to obstacle detection that relies on 3D-Lidar data and combines computationally cheap and fast geometric features, like step height and steepness, which are updated with the frequency of the lidar sensor, with semantic terrain information, which is updated with at lower frequency. We provide experiments in which we evaluate our approach on a real robot on an autonomous run over several kilometers containing different terrain types. The experiments demonstrate that our approach is suitable for autonomous systems that have to navigate reliable on different terrain types including concrete, dirt roads and grass.",
"title": ""
},
{
"docid": "2494840a6f833bd5b20b9b1fadcfc2f8",
"text": "Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.",
"title": ""
},
{
"docid": "8ee1abcf16433d333e530f83be29722f",
"text": "Since the evolution of the internet, many small and large companies have moved their businesses to the internet to provide services to customers worldwide. Cyber credit‐card fraud or no card present fraud is increasingly rampant in the recent years for the reason that the credit‐card i s majorly used to request payments by these companies on the internet. Therefore the need to ensure secured transactions for credit-card owners when consuming their credit cards to make electronic payments for goods and services provided on the internet is a criterion. Data mining has popularly gained recognition in combating cyber credit-card fraud because of its effective artificial intelligence (AI) techniques and algorithms that can be implemented to detect or predict fraud through Knowledge Discovery from unusual patterns derived from gathered data. In this study, a system’s model for cyber credit card fraud detection is discussed and designed. This system implements the supervised anomaly detection algorithm of Data mining to detect fraud in a real time transaction on the internet, and thereby classifying the transaction as legitimate, suspicious fraud and illegitimate transaction. The anomaly detection algorithm is designed on the Neural Networks which implements the working principal of the human brain (as we humans learns from past experience and then make our present day decisions on what we have learned from our past experience). To understand how cyber credit card fraud are being committed, in this study the different types of cyber fraudsters that commit cyber credit card fraud and the techniques used by these cyber fraudsters to commit fraud on the internet is discussed.",
"title": ""
},
{
"docid": "940f460457b117c156b6e39e9586a0b9",
"text": "The flipped classroom is an innovative pedagogical approach that focuses on learner-centered instruction. The purposes of this report were to illustrate how to implement the flipped classroom and to describe students' perceptions of this approach within 2 undergraduate nutrition courses. The template provided enables faculty to design before, during, and after class activities and assessments based on objectives using all levels of Bloom's taxonomy. The majority of the 142 students completing the evaluation preferred the flipped method compared with traditional pedagogical strategies. The process described in the report was successful for both faculty and students.",
"title": ""
},
{
"docid": "2a8f464e709dcae4e34f73654aefe31f",
"text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.",
"title": ""
},
{
"docid": "2eff84064f1d9d183eddc7e048efa8e6",
"text": "Rupinder Kaur, Dr. Jyotsna Sengupta Abstract— The software process model consists of a set of activities undertaken to design, develop and maintain software systems. A variety of software process models have been designed to structure, describe and prescribe the software development process. The software process models play a very important role in software development, so it forms the core of the software product. Software project failure is often devastating to an organization. Schedule slips, buggy releases and missing features can mean the end of the project or even financial ruin for a company. Oddly, there is disagreement over what it means for a project to fail. In this paper, discussion is done on current process models and analysis on failure of software development, which shows the need of new research.",
"title": ""
},
{
"docid": "f34e6c34a499b7b88c18049eec221d36",
"text": "The double-gimbal mechanism (DGM) is a multibody mechanical device composed of three rigid bodies, namely, a base, an inner gimbal, and an outer gimbal, interconnected by two revolute joints. A typical DGM, where the cylindrical base is connected to the outer gimbal by a revolute joint, and the inner gimbal, which is the disk-shaped payload, is connected to the outer gimbal by a revolute joint. The DGM is an integral component of an inertially stabilized platform, which provides motion to maintain line of sight between a target and a platform payload sensor. Modern, commercially available gimbals use two direct-drive or gear-driven motors on orthogonal axes to actuate the joints. Many of these mechanisms are constrained to a reduced operational region, while moresophisticated models use a slip ring to allow continuous rotation about an axis. Angle measurements for each axis are obtained from either a rotary encoder or a resolver. The DGM is a fundamental component of pointing and tracking applications that include missile guidance systems, ground-based telescopes, antenna assemblies, laser communication systems, and close-in weapon systems (CIWSs) such as the Phalanx 1B.",
"title": ""
},
{
"docid": "461d0b9ca1d0f1395d98cb18b2f45a0f",
"text": "Semantic maps augment metric-topological maps with meta-information, i.e. semantic knowledge aimed at the planning and execution of high-level robotic tasks. Semantic knowledge typically encodes human-like concepts, like types of objects and rooms, which are connected to sensory data when symbolic representations of percepts from the robot workspace are grounded to those concepts. This symbol grounding is usually carried out by algorithms that individually categorize each symbol and provide a crispy outcome – a symbol is either a member of a category or not. Such approach is valid for a variety of tasks, but it fails at: (i) dealing with the uncertainty inherent to the grounding process, and (ii) jointly exploiting the contextual relations among concepts (e.g. microwaves are usually in kitchens). This work provides a solution for probabilistic symbol grounding that overcomes these limitations. Concretely, we rely on Conditional Random Fields (CRFs) to model and exploit contextual relations, and to provide measurements about the uncertainty coming from the possible groundings in the form of beliefs (e.g. an object can be categorized (grounded) as a microwave or as a nightstand with beliefs 0.6 and 0.4, respectively). Our solution is integrated into a novel semantic map representation called Multiversal Semantic Map (MvSmap ), which keeps the different groundings, or universes, as instances of ontologies annotated with the obtained beliefs for their posterior exploitation. The suitability of our proposal has been proven with the Robot@Home dataset, a repository that contains challenging multi-modal sensory information gathered by a mobile robot in home environments.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "5da9811fb60b5f6334e05ba71902ddfd",
"text": "In this paper, a numerical TRL calibration technique is used to accurately extract the equivalent circuit parameters of post-wall iris and input/output coupling structure which are used for the design of directly-coupled substrate integrated waveguide (SIW) filter with the first/last SIW cavities directly excited by 50 Ω microstrip line. On the basis of this dimensional design process, the entire procedure of filter design can meet all of the design specifications without resort to any time-consuming tuning and optimization. A K-band 5th-degree SIW filter with relative bandwidth of 6% was designed and fabricated by low-cost PCB process on Rogers RT/duroid 5880. Measured results which agree well with simulated results validate the accurate dimensional synthesis procedure.",
"title": ""
},
{
"docid": "249543df444c1a5e0d37de8c017e5167",
"text": "This review provides an overview of the changing US epidemiology of cannabis use and associated problems. Adults and adolescents increasingly view cannabis as harmless, and some can use cannabis without harm. However, potential problems include harms from prenatal exposure and unintentional childhood exposure; decline in educational or occupational functioning after early adolescent use, and in adulthood, impaired driving and vehicle crashes; cannabis use disorders (CUD), cannabis withdrawal, and psychiatric comorbidity. Evidence suggests national increases in cannabis potency, prenatal and unintentional childhood exposure; and in adults, increased use, CUD, cannabis-related emergency room visits, and fatal vehicle crashes. Twenty-nine states have medical marijuana laws (MMLs) and of these, 8 have recreational marijuana laws (RMLs). Many studies indicate that MMLs or their specific provisions did not increase adolescent cannabis use. However, the more limited literature suggests that MMLs have led to increased cannabis potency, unintentional childhood exposures, adult cannabis use, and adult CUD. Ecological-level studies suggest that MMLs have led to substitution of cannabis for opioids, and also possibly for psychiatric medications. Much remains to be determined about cannabis trends and the role of MMLs and RMLs in these trends. The public, health professionals, and policy makers would benefit from education about the risks of cannabis use, the increases in such risks, and the role of marijuana laws in these increases.",
"title": ""
},
{
"docid": "037042318b99bf9c32831a6b25dcd50e",
"text": "Autoencoders are popular among neural-network-based matrix completion models due to their ability to retrieve potential latent factors from the partially observed matrices. Nevertheless, when training data is scarce their performance is significantly degraded due to overfitting. In this paper, we mitigate overfitting with a data-dependent regularization technique that relies on the principles of multi-task learning. Specifically, we propose an autoencoder-based matrix completion model that performs prediction of the unknown matrix values as a main task, and manifold learning as an auxiliary task. The latter acts as an inductive bias, leading to solutions that generalize better. The proposed model outperforms the existing autoencoder-based models designed for matrix completion, achieving high reconstruction accuracy in well-known datasets.",
"title": ""
},
{
"docid": "ba2f7eb97611cb3a75f236436b048820",
"text": "Learning interpretable disentangled representations is a crucial yet challenging task. In this paper, we propose a weakly semi-supervised method, termed as Dual Swap Disentangling (DSD), for disentangling using both labeled and unlabeled data. Unlike conventional weakly supervised methods that rely on full annotations on the group of samples, we require only limited annotations on paired samples that indicate their shared attribute like the color. Our model takes the form of a dual autoencoder structure. To achieve disentangling using the labeled pairs, we follow a “encoding-swap-decoding” process, where we first swap the parts of their encodings corresponding to the shared attribute, and then decode the obtained hybrid codes to reconstruct the original input pairs. For unlabeled pairs, we follow the “encoding-swap-decoding” process twice on designated encoding parts and enforce the final outputs to approximate the input pairs. By isolating parts of the encoding and swapping them back and forth, we impose the dimension-wise modularity and portability of the encodings of the unlabeled samples, which implicitly encourages disentangling under the guidance of labeled pairs. This dual swap mechanism, tailored for semi-supervised setting, turns out to be very effective. Experiments on image datasets from a wide domain show that our model yields state-of-the-art disentangling performances.",
"title": ""
},
{
"docid": "c1981c3b0ccd26d4c8f02c2aa5e71c7a",
"text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.",
"title": ""
},
{
"docid": "16a3bf4df6fb8e61efad6f053f1c6f9c",
"text": "The objective of this paper is to improve large scale visual object retrieval for visual place recognition. Geo-localization based on a visual query is made difficult by plenty of non-distinctive features which commonly occur in imagery of urban environments, such as generic modern windows, doors, cars, trees, etc. The focus of this work is to adapt standard Hamming Embedding retrieval system to account for varying descriptor distinctiveness. To this end, we propose a novel method for efficiently estimating distinctiveness of all database descriptors, based on estimating local descriptor density everywhere in the descriptor space. In contrast to all competing methods, the (unsupervised) training time for our method (DisLoc) is linear in the number database descriptors and takes only a 100 seconds on a single CPU core for a 1 million image database. Furthermore, the added memory requirements are negligible (1%). The method is evaluated on standard publicly available large-scale place recognition benchmarks containing street-view imagery of Pittsburgh and San Francisco. DisLoc is shown to outperform all baselines, while setting the new state-of-the-art on both benchmarks. The method is compatible with spatial reranking, which further improves recognition results. Finally, we also demonstrate that 7% of the least distinctive features can be removed, therefore reducing storage requirements and improving retrieval speed, without any loss in place recognition accuracy.",
"title": ""
},
{
"docid": "0d45af6e2d038f7fbfd3c39c887b242e",
"text": "Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks. To this end, many defense approaches that attempt to improve the robustness of DNNs have been proposed. In a separate and yet related area, recent works have explored to quantize neural network weights and activation functions into low bit-width to compress model size and reduce computational complexity. In this work, we find that these two different tracks, namely the pursuit of network compactness and robustness, can be merged into one and give rise to networks of both advantages. To the best of our knowledge, this is the first work that uses quantization of activation functions to defend against adversarial examples. We also propose to train robust neural networks by using adaptive quantization techniques for the activation functions. Our proposed Dynamic Quantized Activation (DQA) is verified through a wide range of experiments with the MNIST and CIFAR-10 datasets under different white-box attack methods, including FGSM, PGD, and C&W attacks. Furthermore, Zeroth Order Optimization and substitute model based black-box attacks are also considered in this work. The experimental results clearly show that the robustness of DNNs could be greatly improved using the proposed DQA.",
"title": ""
},
{
"docid": "eb9f34cd2b10f1c8099aad5e9064578a",
"text": "Deep distance metric learning (DDML), which is proposed to learn image similarity metrics in an end-toend manner based on the convolution neural network, has achieved encouraging results in many computer vision tasks. L2-normalization in the embedding space has been used to improve the performance of several DDML methods. However, the commonly used Euclidean distance is no longer an accurate metric for L2-normalized embedding space, i.e., a hyper-sphere. Another challenge of current DDML methods is that their loss functions are usually based on rigid data formats, such as the triplet tuple. Thus, an extra process is needed to prepare data in specific formats. In addition, their losses are obtained from a limited number of samples, which leads to a lack of the global view of the embedding space. In this paper, we replace the Euclidean distance with the cosine similarity to better utilize the L2-normalization, which is able to attenuate the curse of dimensionality. More specifically, a novel loss function based on the von Mises-Fisher distribution is proposed to learn a compact hyper-spherical embedding space. Moreover, a new efficient learning algorithm is developed to better capture the global structure of the embedding space. Experiments for both classification and retrieval tasks on several standard datasets show that our method achieves state-of-the-art performance with a simpler training procedure. Furthermore, we demonstrate that, even with a small number of convolutional layers, our model can still obtain significantly better classification performance than the widely used softmax loss.",
"title": ""
}
] |
scidocsrr
|
030f5fe0356b20354dbf83aa8447bcbe
|
Face Recognition by Humans: Nineteen Results All Computer Vision Researchers Should Know About
|
[
{
"docid": "a1147a7b8bc6777ebb2ab7b4f308cc80",
"text": "We present a new graph-theoretic approach to the problem of image segmentation. Our method uses local criteria and yet produces results that reflect global properties of the image. We develop a framework that provides specific definitions of what it means for an image to be underor over-segmented. We then present an efficient algorithm for computing a segmentation that is neither undernor over-segmented according to these definitions. Our segmentation criterion is based on intensity differences between neighboring pixels. An important characteristic of the approach is that it is able to preserve detail in low-variability regions while ignoring detail in high-variability regions, which we illustrate with several examples on both real and sythetic images.",
"title": ""
}
] |
[
{
"docid": "6bbcbe9f4f4ede20d2b86f6da9167110",
"text": "Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.",
"title": ""
},
{
"docid": "5a97d79641f7006d7b5d0decd3a7ad3e",
"text": "We present a cognitive model of inducing verb selectional preferences from individual verb usages. The selectional preferences for each verb argument are represented as a probability distribution over the set of semantic properties that the argument can possess—asemantic profile . The semantic profiles yield verb-specific conceptualizations of the arguments associated with a syntactic position. The proposed model can learn appropriate verb profiles from a small set of noisy training data, and can use them in simulating human plausibility judgments and analyzing implicit object alternation.",
"title": ""
},
{
"docid": "1880bb9c3229cab3e614ca39079c7781",
"text": "Emerging low-power radio triggering techniques for wireless motes are a promising approach to prolong the lifetime of Wireless Sensor Networks (WSNs). By allowing nodes to activate their main transceiver only when data need to be transmitted or received, wake-up-enabled solutions virtually eliminate the need for idle listening, thus drastically reducing the energy toll of communication. In this paper we describe the design of a novel wake-up receiver architecture based on an innovative pass-band filter bank with high selectivity capability. The proposed concept, demonstrated by a prototype implementation, combines both frequency-domain and time-domain addressing space to allow selective addressing of nodes. To take advantage of the functionalities of the proposed receiver, as well as of energy-harvesting capabilities modern sensor nodes are equipped with, we present a novel wake-up-enabled harvesting-aware communication stack that supports both interest dissemination and converge casting primitives. This stack builds on the ability of the proposed WuR to support dynamic address assignment, which is exploited to optimize system performance. Comparison against traditional WSN protocols shows that the proposed concept allows to optimize performance tradeoffs with respect to existing low-power communication stacks.",
"title": ""
},
{
"docid": "9099ced3a3dc2207e997d2cbdae6b84f",
"text": "In this paper, we propose a generalized minimum-sum decoding algorithm using a linear approximation (LAMS) for protograph-based low-density parity-check (PB-LDPC) codes with quasi-cyclic (QC) structures. The linear approximation introduces some factors in each decoding iteration, which linearly adjust the check node updating and channel output. These factors are optimized iteratively using machine learning, where the optimization can be efficiently solved by a small and shallow neural network with training data produced by the LAMS decoder. The neural network is built according to the parity check matrix of a PB-LDPC code with a QC structure which can greatly reduce the size of the neural network. Since, we optimize the factors once per decoding iteration, the optimization is not limited by the number of the iterations. Then, we give the optimized results of the factors in the LAMS decoder and perform decoding simulations for PB-LDPC codes in fifth generation mobile networks (5G). In the simulations, the LAMS algorithm shows noticeable improvement over the normalized and the offset minimum-sum algorithms and even better performance than the belief propagation algorithm in some high signal-to-noise ratio regions.",
"title": ""
},
{
"docid": "83f18d74ca28f615899f185bc592c9a4",
"text": "A simple circuit technique is presented for improving poor midband power supply rejection ratio (PSRR) of single ended amplifiers that use Miller capacitance to set the location of the dominant pole. The principle of the technique is to create an additional parallel signal path from the power supply to the output, which cancels the dominating unity gain signal path through the output stage and Miller capacitor above the dominant pole frequency. Simulation results of a two-stage amplifier show that more than a 20dB improvement in the midband PSRR is obtainable as compared with an amplifier without the suggested circuit",
"title": ""
},
{
"docid": "86b8f11b19fec6a120edddc12e107215",
"text": "This paper presents the design procedure, optimization strategy, theoretical analysis, and experimental results of a wideband dual-polarized base station antenna element with superior performance. The proposed antenna element consists of four electric folded dipoles arranged in an octagon shape that are excited simultaneously for each polarization. It provides ±45° slant-polarized radiation that meets all the requirements for base station antenna elements, including stable radiation patterns, low cross polarization level, high port-to-port isolation, and excellent matching across the wide band. The problem of beam squint for beam-tilted arrays is discussed and it is found that the geometry of this element serves to reduce beam squint. Experimental results show that this element has a wide bandwidth of 46.4% from 1.69 to 2.71 GHz with ≥15-dB return loss and 9.8 ± 0.9-dBi gain. Across this wide band, the variations of the half-power-beamwidths of the two polarizations are all within 66.5° ± 5.5°, the port-to-port isolation is >28 dB, the cross-polarization discrimination is >25 dB, and most importantly, the beam squint is <4° with a maximum 10° down-tilt.",
"title": ""
},
{
"docid": "4ba81ce5756f2311dde3fa438f81e527",
"text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.",
"title": ""
},
{
"docid": "6b55931c9945a71de6b28789323f191b",
"text": "Resistant hypertension-uncontrolled hypertension with 3 or more antihypertensive agents-is increasingly common in clinical practice. Clinicians should exclude pseudoresistant hypertension, which results from nonadherence to medications or from elevated blood pressure related to the white coat syndrome. In patients with truly resistant hypertension, thiazide diuretics, particularly chlorthalidone, should be considered as one of the initial agents. The other 2 agents should include calcium channel blockers and angiotensin-converting enzyme inhibitors for cardiovascular protection. An increasing body of evidence has suggested benefits of mineralocorticoid receptor antagonists, such as eplerenone and spironolactone, in improving blood pressure control in patients with resistant hypertension, regardless of circulating aldosterone levels. Thus, this class of drugs should be considered for patients whose blood pressure remains elevated after treatment with a 3-drug regimen to maximal or near maximal doses. Resistant hypertension may be associated with secondary causes of hypertension including obstructive sleep apnea or primary aldosteronism. Treating these disorders can significantly improve blood pressure beyond medical therapy alone. The role of device therapy for treating the typical patient with resistant hypertension remains unclear.",
"title": ""
},
{
"docid": "92e955705aa333923bb7b14af946fc2f",
"text": "This study examines the role of online daters’ physical attractiveness in their profile selfpresentation and, in particular, their use of deception. Sixty-nine online daters identified the deceptions in their online dating profiles and had their photograph taken in the lab. Independent judges rated the online daters’ physical attractiveness. Results show that the lower online daters’ attractiveness, the more likely they were to enhance their profile photographs and lie about their physical descriptors (height, weight, age). The association between attractiveness and deception did not extend to profile elements unrelated to their physical appearance (e.g., income, occupation), suggesting that their deceptions were limited and strategic. Results are discussed in terms of (a) evolutionary theories about the importance of physical attractiveness in the dating realm and (b) the technological affordances that allow online daters to engage in selective self-presentation.",
"title": ""
},
{
"docid": "0afb2a40553e1bef9d8250a3c5012180",
"text": "Attacks to networks are becoming more complex and sophisticated every day. Beyond the so-called script-kiddies and hacking newbies, there is a myriad of professional attackers seeking to make serious profits infiltrating in corporate networks. Either hostile governments, big corporations or mafias are constantly increasing their resources and skills in cybercrime in order to spy, steal or cause damage more effectively. With the ability and resources of hackers growing, the traditional approaches to Network Security seem to start hitting their limits and it’s being recognized the need for a smarter approach to threat detections. This paper provides an introduction on the need for evolution of Cyber Security techniques and how Artificial Intelligence (AI) could be of application to help solving some of the problems. It provides also, a high-level overview of some state of the art AI Network Security techniques, to finish analysing what is the foreseeable future of the application of AI to Network Security. Applications of Artificial Intelligence (AI) to Network Security 3",
"title": ""
},
{
"docid": "951c2ce5816ffd7be55b8ae99a82f5fc",
"text": "Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google's Play Market is presented. \n We introduce MalloDroid, a tool to detect potential vulnerability against MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users' perceptions of certificate warnings and HTTPS visual security indicators in Android's browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated.",
"title": ""
},
{
"docid": "19b537f7356da81830c8f7908af83669",
"text": "Investigation of the hippocampus has historically focused on computations within the trisynaptic circuit. However, discovery of important anatomical and functional variability along its long axis has inspired recent proposals of long-axis functional specialization in both the animal and human literatures. Here, we review and evaluate these proposals. We suggest that various long-axis specializations arise out of differences between the anterior (aHPC) and posterior hippocampus (pHPC) in large-scale network connectivity, the organization of entorhinal grid cells, and subfield compositions that bias the aHPC and pHPC towards pattern completion and separation, respectively. The latter two differences give rise to a property, reflected in the expression of multiple other functional specializations, of coarse, global representations in anterior hippocampus and fine-grained, local representations in posterior hippocampus.",
"title": ""
},
{
"docid": "a51a8dbf4b44953e4cee202099d46a0e",
"text": "The effects of selected nonionic emulsifiers on the physicochemical characteristics of astaxanthin nanodispersions produced by an emulsification/evaporation technique were studied. The emulsifiers used were polysorbates (Polysorbate 20, Polysorbate 40, Polysorbate 60 and Polysorbate 80) and sucrose esters of fatty acids (sucrose laurate, palmitate, stearate and oleate). The mean particle diameters of the nanodispersions ranged from 70 nm to 150 nm, depending on the emulsifier used. In the prepared nanodispersions, the astaxanthin particle diameter decreased with increasing emulsifier hydrophilicity and decreasing carbon number of the fatty acid in the emulsifier structure. Astaxanthin nanodispersions with the smallest particle diameters were produced with Polysorbate 20 and sucrose laurate among the polysorbates and the sucrose esters, respectively. We also found that the Polysorbate 80- and sucrose oleate-stabilized nanodispersions had the highest astaxanthin losses (i.e., the lowest astaxanthin contents in the final products) among the nanodispersions. This work demonstrated the importance of emulsifier type in determining the physicochemical characteristics of astaxanthin nano-dispersions.",
"title": ""
},
{
"docid": "dd9d776dbc470945154d460921005204",
"text": "The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants. In this article we present three novel parallel versions of the ACS for the graphics processing units (GPUs). To the best of our knowledge, this is the first such work on the ACS which shares many key elements of the ACO and the MMAS, but differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for the GPUs a difficult task. The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory. The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory. Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup up to 24.29x vs the sequential ACS running on a single core of Intel Xeon E5-2670 CPU. The parallel ACS with the selective pheromone memory achieved speedups up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.",
"title": ""
},
{
"docid": "248a447eb07f0939fa479b0eb8778756",
"text": "The present study was done to determine the long-term success and survival of fixed partial dentures (FPDs) and to evaluate the risks for failures due to specific biological and technical complications. A MEDLINE search (PubMed) from 1966 up to March 2004 was conducted, as well as hand searching of bibliographies from relevant articles. Nineteen studies from an initial yield of 3658 titles were finally selected and data were extracted independently by three reviewers. Prospective and retrospective cohort studies with a mean follow-up time of at least 5 years in which patients had been examined clinically at the follow-up visits were included in the meta-analysis. Publications only based on patients records, questionnaires or interviews were excluded. Survival of the FPDs was analyzed according to in situ and intact failure risks. Specific biological and technical complications such as caries, loss of vitality and periodontal disease recurrence as well as loss of retention, loss of vitality, tooth and material fractures were also analyzed. The 10-year probability of survival for fixed partial dentures was 89.1% (95% confidence interval (CI): 81-93.8%) while the probability of success was 71.1% (95% CI: 47.7-85.2%). The 10-year risk for caries and periodontitis leading to FPD loss was 2.6% and 0.7%, respectively. The 10-year risk for loss of retention was 6.4%, for abutment fracture 2.1% and for material fractures 3.2%.",
"title": ""
},
{
"docid": "14ca9dfee206612e36cd6c3b3e0ca61e",
"text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.",
"title": ""
},
{
"docid": "8d7a7bc2b186d819b36a0a8a8ba70e39",
"text": "Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRF’s, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.",
"title": ""
},
{
"docid": "70cad4982e42d44eec890faf6ddc5c75",
"text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.",
"title": ""
}
] |
scidocsrr
|
1edd1ffbef283d1cebfa1a3ce9e8a1ac
|
LabelRankT: incremental community detection in dynamic networks via label propagation
|
[
{
"docid": "a50ec2ab9d5d313253c6656049d608b3",
"text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.",
"title": ""
},
{
"docid": "f96bf84a4dfddc8300bb91227f78b3af",
"text": "Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and realworld networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.",
"title": ""
}
] |
[
{
"docid": "dde5083017c2db3ffdd90668e28bab4b",
"text": "Current industry standards for describing Web Services focus on ensuring interoperability across diverse platforms, but do not provide a good foundation for automating the use of Web Services. Representational techniques being developed for the Semantic Web can be used to augment these standards. The resulting Web Service specifications enable the development of software programs that can interpret descriptions of unfamiliar Web Services and then employ those services to satisfy user goals. OWL-S (“OWL for Services”) is a set of notations for expressing such specifications, based on the Semantic Web ontology language OWL. It consists of three interrelated parts: a profile ontology, used to describe what the service does; a process ontology and corresponding presentation syntax, used to describe how the service is used; and a grounding ontology, used to describe how to interact with the service. OWL-S can be used to automate a variety of service-related activities involving service discovery, interoperation, and composition. A large body of research on OWL-S has led to the creation of many open-source tools for developing, reasoning about, and dynamically utilizing Web Services.",
"title": ""
},
{
"docid": "5d624fadc5502ef0b65c227d4dd47a9a",
"text": "In this work, highly selective filters based on periodic arrays of electrically small resonators are pointed out. The high-pass filters are implemented in microstrip technology by etching complementary split ring resonators (CSRRs), or complementary spiral resonators (CSRs), in the ground plane, and series capacitive gaps, or interdigital capacitors, in the signal strip. The structure exhibits a composite right/left handed (CRLH) behavior and, by properly tuning the geometry of the elements, a high pass response with a sharp transition band is obtained. The low-pass filters, also implemented in microstrip technology, are designed by cascading open complementary split ring resonators (OCSRRs) in the signal strip. These low pass filters do also exhibit a narrow transition band. The high selectivity of these microwave filters is due to the presence of a transmission zero. Since the resonant elements are small, filter dimensions are compact. Several prototype device examples are reported in this paper.",
"title": ""
},
{
"docid": "eaa6daff2f28ea7f02861e8c67b9c72b",
"text": "The demand of fused magnesium furnaces (FMFs) refers to the average value of the power of the FMFs over a fixed period of time before the current time. The demand is an indicator of the electricity consumption of high energy-consuming FMFs. When the demand exceeds the limit of the Peak Demand (a predetermined maximum demand), the power supply of some FMF will be cut off to ensure that the demand is no more than Peak Demand. But the power cutoff will destroy the heat balance, reduce the quality and yield of the product. The composition change of magnesite in FMFs will cause demand spike occasionally, which a sudden increase in demand exceeds the limit and then drops below the limit. As a result, demand spike cause the power cutoff. In order to avoid the power cutoff at the moment of demand spike, the demand of FMFs needs to be forecasted. This paper analyzes the dynamic model of the demand of FMFs, using the power data, presents a data-driven demand forecasting method. This method consists of the following: PACF based decision module for the number of the input variables of the forecasting model, RBF neural network (RBFNN) based power variation rate forecasting model and demand forecasting model. Simulations based on actual data and industrial experiments at a fused magnesia plant show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "323abed1a623e49db50bed383ab26a92",
"text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.",
"title": ""
},
{
"docid": "9ffd665d6fe680fc4e7b9e57df48510c",
"text": "BACKGROUND\nIn light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development.\n\n\nMETHODS\nIn a phase 3 efficacy trial of a tetravalent dengue vaccine in five Latin American countries where dengue is endemic, we randomly assigned healthy children between the ages of 9 and 16 years in a 2:1 ratio to receive three injections of recombinant, live, attenuated, tetravalent dengue vaccine (CYD-TDV) or placebo at months 0, 6, and 12 under blinded conditions. The children were then followed for 25 months. The primary outcome was vaccine efficacy against symptomatic, virologically confirmed dengue (VCD), regardless of disease severity or serotype, occurring more than 28 days after the third injection.\n\n\nRESULTS\nA total of 20,869 healthy children received either vaccine or placebo. At baseline, 79.4% of an immunogenicity subgroup of 1944 children had seropositive status for one or more dengue serotypes. In the per-protocol population, there were 176 VCD cases (with 11,793 person-years at risk) in the vaccine group and 221 VCD cases (with 5809 person-years at risk) in the control group, for a vaccine efficacy of 60.8% (95% confidence interval [CI], 52.0 to 68.0). In the intention-to-treat population (those who received at least one injection), vaccine efficacy was 64.7% (95% CI, 58.7 to 69.8). Serotype-specific vaccine efficacy was 50.3% for serotype 1, 42.3% for serotype 2, 74.0% for serotype 3, and 77.7% for serotype 4. Among the severe VCD cases, 1 of 12 was in the vaccine group, for an intention-to-treat vaccine efficacy of 95.5%. Vaccine efficacy against hospitalization for dengue was 80.3%. The safety profile for the CYD-TDV vaccine was similar to that for placebo, with no marked difference in rates of adverse events.\n\n\nCONCLUSIONS\nThe CYD-TDV dengue vaccine was efficacious against VCD and severe VCD and led to fewer hospitalizations for VCD in five Latin American countries where dengue is endemic. (Funded by Sanofi Pasteur; ClinicalTrials.gov number, NCT01374516.).",
"title": ""
},
{
"docid": "c3c7c392b4e7afedb269aa39e2b4680a",
"text": "The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.",
"title": ""
},
{
"docid": "907b8a8a8529b09114ae60e401bec1bd",
"text": "Studies of information seeking and workplace collaboration often find that social relationships are a strong factor in determining who collaborates with whom. Social networks provide one means of visualizing existing and potential interaction in organizational settings. Groupware designers are using social networks to make systems more sensitive to social situations and guide users toward effective collaborations. Yet, the implications of embedding social networks in systems have not been systematically studied. This paper details an evaluation of two different social networks used in a system to recommend individuals for possible collaboration. The system matches people looking for expertise with individuals likely to have expertise. The effectiveness of social networks for matching individuals is evaluated and compared. One finding is that social networks embedded into systems do not match individuals' perceptions of their personal social network. This finding and others raise issues for the use of social networks in groupware. Based on the evaluation results, several design considerations are discussed.",
"title": ""
},
{
"docid": "39351cdf91466aa12576d9eb475fb558",
"text": "Fault tolerance is a remarkable feature of biological systems and their self-repair capability influence modern electronic systems. In this paper, we propose a novel plastic neural network model, which establishes homeostasis in a spiking neural network. Combined with this plasticity and the inspiration from inhibitory interneurons, we develop a fault-resilient robotic controller implemented on an FPGA establishing obstacle avoidance task. We demonstrate the proposed methodology on a spiking neural network implemented on Xilinx Artix-7 FPGA. The system is able to maintain stable firing (tolerance ±10%) with a loss of up to 75% of the original synaptic inputs to a neuron. Our repair mechanism has minimal hardware overhead with a tuning circuit (repair unit) which consumes only three slices/neuron for implementing a threshold voltage-based homeostatic fault-tolerant unit. The overall architecture has a minimal impact on power consumption and, therefore, supports scalable implementations. This paper opens a novel way of implementing the behavior of natural fault tolerant system in hardware establishing homeostatic self-repair behavior.",
"title": ""
},
{
"docid": "8d02b303ad5fc96a082880d703682de4",
"text": "Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive <italic>regular clinical motifs</italic> from <italic> irregular episodic records</italic>. We present <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math> </inline-formula> (short for <italic>Deep</italic> <italic>r</italic>ecord), a new <italic>end-to-end</italic> deep learning system that learns to extract features from medical records and predicts future risk automatically. <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> permits transparent inspection and visualization of its inner working. We validate <inline-formula><tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$ </tex-math></inline-formula> on hospital data to predict unplanned readmission after discharge. <inline-formula> <tex-math notation=\"LaTeX\">$\\mathtt {Deepr}$</tex-math></inline-formula> achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.",
"title": ""
},
{
"docid": "ca8c262513466709a9d1eee198c804cc",
"text": "Theories of language production have long been expressed as connectionist models. We outline the issues and challenges that must be addressed by connectionist models of lexical access and grammatical encoding, and review three recent models. The models illustrate the value of an interactive activation approach to lexical access in production, the need for sequential output in both phonological and grammatical encoding, and the potential for accounting for structural effects on errors and structural priming from learning.",
"title": ""
},
{
"docid": "38d650cb945dc50d97762186585659a4",
"text": "Sustainable biofuels, biomaterials, and fine chemicals production is a critical matter that research teams around the globe are focusing on nowadays. Polyhydroxyalkanoates represent one of the biomaterials of the future due to their physicochemical properties, biodegradability, and biocompatibility. Designing efficient and economic bioprocesses, combined with the respective social and environmental benefits, has brought together scientists from different backgrounds highlighting the multidisciplinary character of such a venture. In the current review, challenges and opportunities regarding polyhydroxyalkanoate production are presented and discussed, covering key steps of their overall production process by applying pure and mixed culture biotechnology, from raw bioprocess development to downstream processing.",
"title": ""
},
{
"docid": "f923a3a18e8000e4094d4a6d6e69b18f",
"text": "We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on clusters of the training set. Single-frame classification performance results and system level performance figures for daytime conditions are presented with a discussion about the remaining gap to meet a daytime normal weather condition production system.",
"title": ""
},
{
"docid": "b4b66392aec0c4e00eb6b1cabbe22499",
"text": "ADJ: Adjectives that occur with the NP CMC: Orthographic features of the NP CPL: Phrases that occur with the NP VERB: Verbs that appear with the NP Task: Predict whether a noun phrase (NP) belongs to a category (e.g. “city”) Category # Examples animal 20,733 beverage 18,932 bird 19,263 bodypart 21,840 city 21,778 disease 21,827 drug 20,452 fish 19,162 food 19,566 fruit 18,911 muscle 21,606 person 21,700 protein 21,811 river 21,723 vegetable 18,826",
"title": ""
},
{
"docid": "4a6dc591d385d0fb02a98067d8a42f33",
"text": "A new field has emerged to investigate the cognitive neuroscience of social behaviour, the popularity of which is attested by recent conferences, special issues of journals and by books. But the theoretical underpinnings of this new field derive from an uneasy marriage of two different approaches to social behaviour: sociobiology and evolutionary psychology on the one hand, and social psychology on the other. The first approach treats the study of social behaviour as a topic in ethology, continuous with studies of motivated behaviour in other animals. The second approach has often emphasized the uniqueness of human behaviour, and the uniqueness of the individual person, their environment and their social surroundings. These two different emphases do not need to conflict with one another. In fact, neuroscience might offer a reconciliation between biological and psychological approaches to social behaviour in the realization that its neural regulation reflects both innate, automatic and COGNITIVELY IMPENETRABLE mechanisms, as well as acquired, contextual and volitional aspects that include SELF-REGULATION. We share the first category of features with other species, and we might be distinguished from them partly by elaborations on the second category of features. In a way, an acknowledgement of such an architecture simply provides detail to the way in which social cognition is complex — it is complex because it is not monolithic, but rather it consists of several tracks of information processing that can be variously recruited depending on the circumstances. Specifying those tracks, the conditions under which they are engaged, how they interact, and how they must ultimately be coordinated to regulate social behaviour in an adaptive fashion, is the task faced by a neuroscientific approach to social cognition.",
"title": ""
},
{
"docid": "95395c693b4cdfad722ae0c3545f45ef",
"text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.",
"title": ""
},
{
"docid": "175229c7b756a2ce40f86e27efe28d53",
"text": "This paper describes a comparative study of the envelope extraction algorithms for the cardiac sound signal segmentation. In order to extract the envelope curves based on the time elapses of the first and the second heart sounds of cardiac sound signals, three representative algorithms such as the normalized average Shannon energy, the envelope information of Hilbert transform, and the cardiac sound characteristic waveform (CSCW) are introduced. Performance comparison of the envelope extraction algorithms, and the advantages and disadvantages of the methods are examined by some parameters. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5d80bf63f19f3aa271c0d16e179c90d6",
"text": "3D meshes are deployed in a wide range of application processes (e.g., transmission, compression, simplification, watermarking and so on) which inevitably introduce geometric distortions that may alter the visual quality of the rendered data. Hence, efficient model-based perceptual metrics, operating on the geometry of the meshes being compared, have been recently introduced to control and predict these visual artifacts. However, since the 3D models are ultimately visualized on 2D screens, it seems legitimate to use images of the models (i.e., snapshots from different viewpoints) to evaluate their visual fidelity. In this work we investigate the use of image metrics to assess the visual quality of 3D models. For this goal, we conduct a wide-ranging study involving several 2D metrics, rendering algorithms, lighting conditions and pooling algorithms, as well as several mean opinion score databases. The collected data allow (1) to determine the best set of parameters to use for this image-based quality assessment approach and (2) to compare this approach to the best performing model-based metrics and determine for which use-case they are respectively adapted. We conclude by exploring several applications that illustrate the benefits of image-based quality assessment.",
"title": ""
},
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "bd9064905ba4ed166ad1e9c41eca7b34",
"text": "Governments worldwide are encouraging public agencies to join e-Government initiatives in order to provide better services to their citizens and businesses; hence, methods of evaluating the readiness of individual public agencies to execute specific e-Government programs and directives are a key ingredient in the successful expansion of e-Government. To satisfy this need, a model called the eGovernment Maturity Model (eGov-MM) was developed, integrating the assessment of technological, organizational, operational, and human capital capabilities, under a multi-dimensional, holistic, and evolutionary approach. The model is strongly supported by international best practices, and provides tuning mechanisms to enable its alignment with nation-wide directives on e-Government. This article describes how the model was conceived, designed, developed, field tested by expert public officials from several government agencies, and finally applied to a selection of 30 public agencies in Chile, generating the first formal measurements, assessments, and rankings of their readiness for eGovernment. The implementation of the model also provided several recommendations to policymakers at the national and agency levels.",
"title": ""
},
{
"docid": "e36e318dd134fd5840d5a5340eb6e265",
"text": "Business Intelligence (BI) promises a range of technologies for using information to ensure compliance to strategic and tactical objectives, as well as government laws and regulations. These technologies can be used in conjunction with conceptual models of business objectives, processes and situations (aka business schemas) to drive strategic decision-making about opportunities and threats etc. This paper focuses on three key concepts for strategic business models -situation, influence and indicator -and how they are used for strategic analysis. The semantics of these concepts are defined using a state-ofthe-art upper ontology (DOLCE+). We also propose a method for building a business schema, and demonstrate alternative ways of formal analysis of the schema based on existing tools for goal and probabilistic reasoning.",
"title": ""
}
] |
scidocsrr
|
9e772c3fe6b03ee01c4d088fc6e18d19
|
The Alexa Meaning Representation Language
|
[
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
}
] |
[
{
"docid": "7959204dbaa087fc7c37e4157e057efc",
"text": "OBJECTIVE\nThe primary objective of this study was to compare the effectiveness of a water flosser plus sonic toothbrush to a sonic toothbrush alone on the reduction of bleeding, gingivitis, and plaque. The secondary objective was to compare the effectiveness of different sonic toothbrushes on bleeding, gingivitis, and plaque.\n\n\nMETHODS\nOne-hundred and thirty-nine subjects completed this randomized, four-week, single-masked, parallel clinical study. Subjects were assigned to one of four groups: Waterpik Complete Care, which is a combination of a water flosser plus power toothbrush (WFS); Sensonic Professional Plus Toothbrush (SPP); Sonicare FlexCare toothbrush (SF); or an Oral-B Indicator manual toothbrush (MT). Subjects were provided written and verbal instructions for all power products at baseline, and instructions were reviewed at the two-week visit. Data were evaluated for whole mouth, facial, and lingual surfaces for bleeding on probing (BOP) and gingivitis (MGI). Plaque data were evaluated for whole mouth, lingual, facial, approximal, and marginal areas of the tooth using the Rustogi Modification of the Navy Plaque Index (RMNPI). Data were recorded at baseline (BL), two weeks (W2), and four weeks (W4).\n\n\nRESULTS\nAll groups showed a significant reduction from BL in BOP, MGI, and RMNPI for all areas measured at the W2 and W4 visits (p < 0.001). The reduction of BOP was significantly higher for the WFS group than the other three groups at W2 and W4 for all areas measured (p < 0.001 for all, except p = 0.007 at W2 and p = 0.008 for W4 lingual comparison to SPP). The WFS group was 34% more effective than the SPP group, 70% more effective than the SF group, and 1.59 times more effective than the MT group for whole mouth bleeding scores (p < 0.001) at W4. The reduction of MGI was significantly higher for the WFS group; 23% more effective than SPP, 48% more effective than SF, and 1.35 times more effective than MT for whole mouth (p <0.001) at W4. The reduction of MGI was significantly higher for WFS than the SF and MT for facial and lingual surfaces, and more effective than the SPP for facial surfaces (p < 0.001) at W4. The WFS group showed significantly better reductions for plaque than the SF and MT groups for whole mouth, facial, lingual, approximal, and marginal areas at W4 (p < 0.001; SF facial p = 0.025). For plaque reduction, the WFS was significantly better than the SPP for whole mouth (p = 0.003) and comparable for all other areas and surfaces at W4. The WFS was 52% more effective for whole mouth, 31% for facial, 77% for lingual, 1.22 times for approximal, and 1.67 times for marginal areas compared to the SF for reducing plaque scores at W4 (p < 0.001; SF facial p = 0.025). The SPP had significantly higher reductions than the SF for whole mouth and lingual BOP and MGI scores, and whole mouth, approximal, marginal, and lingual areas for plaque at W4.\n\n\nCONCLUSION\nThe Waterpik Complete Care is significantly more effective than the Sonicare FlexCare toothbrush for reducing gingival bleeding, gingivitis, and plaque. The Sensonic Professional Plus Toothbrush is significantly more effective than the Sonicare Flex-Care for reducing gingival bleeding, gingivitis, and plaque.",
"title": ""
},
{
"docid": "2f0769d0f3a1c29a3b794f964a2a560c",
"text": "We propose a statistical method based on graphical Gaussian models for estimating large gene networks from DNA microarray data. In estimating large gene networks, the number of genes is larger than the number of samples, we need to consider some restrictions for model building. We propose weighted lasso estimation for the graphical Gaussian models as a model of large gene networks. In the proposed method, the structural learning for gene networks is equivalent to the selection of the regularization parameters included in the weighted lasso estimation. We investigate this problem from a Bayes approach and derive an empirical Bayesian information criterion for choosing them. Unlike Bayesian network approach, our method can find the optimal network structure and does not require to use heuristic structural learning algorithm. We conduct Monte Carlo simulation to show the effectiveness of the proposed method. We also analyze Arabidopsis thaliana microarray data and estimate gene networks.",
"title": ""
},
{
"docid": "c10adaa38fd3f832767daf5e0baf07f5",
"text": "Cellular senescence entails essentially irreversible replicative arrest, apoptosis resistance, and frequently acquisition of a pro-inflammatory, tissue-destructive senescence-associated secretory phenotype (SASP). Senescent cells accumulate in various tissues with aging and at sites of pathogenesis in many chronic diseases and conditions. The SASP can contribute to senescence-related inflammation, metabolic dysregulation, stem cell dysfunction, aging phenotypes, chronic diseases, geriatric syndromes, and loss of resilience. Delaying senescent cell accumulation or reducing senescent cell burden is associated with delay, prevention, or alleviation of multiple senescence-associated conditions. We used a hypothesis-driven approach to discover pro-survival Senescent Cell Anti-apoptotic Pathways (SCAPs) and, based on these SCAPs, the first senolytic agents, drugs that cause senescent cells to become susceptible to their own pro-apoptotic microenvironment. Several senolytic agents, which appear to alleviate multiple senescence-related phenotypes in pre-clinical models, are beginning the process of being translated into clinical interventions that could be transformative.",
"title": ""
},
{
"docid": "60664c058868f08a67d14172d87a4756",
"text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.",
"title": ""
},
{
"docid": "504fcb97010d71fd07aca8bc9543af8b",
"text": "The presence of raindrop induced distortion can have a significant negative impact on computer vision applications. Here we address the problem of visual raindrop distortion in standard colour video imagery for use in non-static, automotive computer vision applications where the scene can be observed to be changing over subsequent consecutive frames. We utilise current state of the art research conducted into the investigation of salience mapping as means of initial detection of potential raindrop candidates. We further expand on this prior state of the art work to construct a combined feature rich descriptor of shape information (Hu moments), isolation of raindrops pixel information from context, and texture (saliency derived) within an improved visual bag of words verification framework. Support Vector Machine and Random Forest classification were utilised for verification of potential candidates, and the effects of increasing discrete cluster centre counts on detection rates were studied. This novel approach of utilising extended shape information, isolation of context, and texture, along with increasing cluster counts, achieves a notable 13% increase in precision (92%) and 10% increase in recall (86%) against prior state of the art. False positive rates were also observed to decrease with a minimal false positive rate of 14% observed. iv ACKNOWLEDGEMENTS I wish to thank Dr Toby Breckon for his time and commitment during my project, and for the help given in patiently explaining things for me. I also wish to thank Dr Mark Stillwell for his never tiring commitment to proofreading and getting up to speed with my project. Without them, this thesis would never have come together in the way it did. I wish to thank my partner, Dr Victoria Gortowski for allowing me to go back to university, supporting me and having faith that I could do it, without which, I do not think I would have. And last but not least, Lara and Mitsy. Thank you.",
"title": ""
},
{
"docid": "71428f1d968a25eb7df33f55557eb424",
"text": "BACKGROUND\nThe 'Choose and Book' system provides an online booking service which primary care professionals can book in real time or soon after a patient's consultation. It aims to offer patients choice and improve outpatient clinic attendance rates.\n\n\nOBJECTIVE\nAn audit comparing attendance rates of new patients booked into the Audiological Medicine Clinic using the 'Choose and Book' system with that of those whose bookings were made through the traditional booking system.\n\n\nMETHODS\nData accrued between 1 April 2008 and 31 October 2008 were retrospectively analysed for new patient attendance at the department, and the age and sex of the patients, method of appointment booking used and attendance record were collected. Patients were grouped according to booking system used - 'Choose and Book' or the traditional system. The mean ages of the groups were compared by a t test. The standard error of the difference between proportions was used to compare the data from the two groups. A P value of < or = 0.05 was considered to be significant.\n\n\nRESULTS\n'Choose and Book' patients had a significantly better rate of attendance than traditional appointment patients, P < 0.01 (95% CI 4.3, 20.5%). There was no significant difference between the two groups in terms of sex, P > 0.1 (95% CI-3.0, 16.2%). The 'Choose and Book' patients, however, were significantly older than the traditional appointment patients, P < 0.001 (95% CI 4.35, 12.95%).\n\n\nCONCLUSION\nThis audit suggests that when primary care agents book outpatient clinic appointments online it improves outpatient attendance.",
"title": ""
},
{
"docid": "e5ef0f63e08d4f70d086a212f185cf97",
"text": "Software defined network (SDN) can effectively improve the performance of traffic engineering and will be widely used in backbone networks. Therefore, new energy-saving schemes must take SDN into consideration; this action is extremely important owing to the rapidly increasing energy consumption in telecom and Internet service provider (ISP) networks. Meanwhile, the introduction of SDN in current networks must be incremental in most cases, for technical and economic reasons. During this period, operators must manage hybrid networks in which SDN and traditional protocols coexist. In this study, we investigate the energy-efficient traffic engineering problem in hybrid SDN/Internet protocol (IP) networks. First, we formulate the mathematical optimization model considering the SDN/IP hybrid routing mode. The problem is NP-hard; therefore, we propose a fast heuristic algorithm named hybrid energy-aware traffic engineering (HEATE) as a solution. In our proposed HEATE algorithm, the IP routers perform shortest-path routing by using distributed open shortest path first (OSPF) link weight optimization. The SDNs perform multi-path routing with traffic-flow splitting managed by the global SDN controller. The HEATE algorithm determines the optimal setting for the OSPF link weight and the splitting ratio of SDNs. Thus, the traffic flow is aggregated onto partial links, and the underutilized links can be turned off to save energy. Based on computer simulation results, we demonstrate that our algorithm achieves a significant improvement in energy efficiency in hybrid SDN/IP networks.",
"title": ""
},
{
"docid": "b7b2f1c59dfc00ab6776c6178aff929c",
"text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the networks edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.",
"title": ""
},
{
"docid": "885764d7e71711b8f9a086d43c6e4f9a",
"text": "In Indian economy, Agriculture is the most important branch and 70 percentage of rural population livelihood depends on agricultural work. Farming is the one of the important part of Agriculture. Crop yield depends on environment’s factors like precipitation, temperature, evapotranspiration, etc. Generally farmers cultivate crop, based on previous experience. But nowadays, the uncertainty increased in environment. So, accurate analysis of historic data of environment parameters should be done for successful farming. To get more harvest, we should also do the analysis of previous cultivation data. The Prediction of crop yield can be done based on historic crop cultivation data and weather data using data mining methods. This paper describes the role of data mining in Agriculture and crop yield prediction. This paper also describes Groundnut crop yield prediction analysis and Naive Bayes Method.",
"title": ""
},
{
"docid": "1a35d97c2160c2d8e3aef95b6b427c48",
"text": "We presented a comparison between several feature ranking methods used on two real datasets. We considered six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models, namely, IB1, Naive Bayes, C4.5 decision tree and the RBF network. We showed that the selection of ranking methods could be important for classification accuracy. In our experiments, ranking methods with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that a subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.",
"title": ""
},
{
"docid": "f2fc6440b95c9ed93f5925672798ae2d",
"text": "This paper presents a standalone 5.6 nV/√Hz chopper op-amp that operates from a 2.1-5.5 V supply. Frequency compensation is achieved in a power-and area-efficient manner by using a current attenuator and a dummy differential output. As a result, the overall op-amp only consumes 1.4 mA supply current and 1.26 mm2 die area. Up-modulated chopper ripple is suppressed by a local feedback technique, called auto correction feedback (ACFB). The charge injection of the input chopping switches can cause residual offset voltages, especially with the wider switches needed to reduce thermal noise. By employing an adaptive clock boosting technique with NMOS input switches, the amount of charge injection is minimized and kept constant as the input common-mode voltage changes. This results in a 0.5 μV maximum offset and 0.015 μV/°C maximum drift over the amplifier's entire rail-to-rail input common-mode range and from -40 °C to 125 °C. The design is implemented in a 0.35 μm CMOS process augmented by 5 V CMOS transistors.",
"title": ""
},
{
"docid": "f1bc297544e333f08387cfd410e1dc75",
"text": "Cascades are ubiquitous in various network environments. How to predict these cascades is highly nontrivial in several vital applications, such as viral marketing, epidemic prevention and traffic management. Most previous works mainly focus on predicting the final cascade sizes. As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or predict the time when a cascade will reach a certain size (e.g. an threshold for outbreak). In this paper, we unify all these tasks into a fundamental problem: cascading process prediction. That is, given the early stage of a cascade, how to predict its cumulative cascade size of any later time? For such a challenging problem, how to understand the micro mechanism that drives and generates the macro phenomena (i.e. cascading process) is essential. Here we introduce behavioral dynamics as the micro mechanism to describe the dynamic process of a node's neighbors getting infected by a cascade after this node getting infected (i.e. one-hop subcascades). Through data-driven analysis, we find out the common principles and patterns lying in behavioral dynamics and propose a novel Networked Weibull Regression model for behavioral dynamics modeling. After that we propose a novel method for predicting cascading processes by effectively aggregating behavioral dynamics, and present a scalable solution to approximate the cascading process with a theoretical guarantee. We extensively evaluate the proposed method on a large scale social network dataset. The results demonstrate that the proposed method can significantly outperform other state-of-the-art baselines in multiple tasks including cascade size prediction, outbreak time prediction and cascading process prediction.",
"title": ""
},
{
"docid": "024cbb734053b256fd7b20b1a757d780",
"text": "The IETF is currently working on service differentiation in the Internet. However, in wireless environments where bandwidth is scarce and channel conditions are variable, IP differentiated services are suboptimal without lower layers’ support. In this paper we present three service differentiation schemes for IEEE 802.11. The first one is based on scaling the contention window according to the priority of each flow or user. The second one assigns different inter frame spacings to different users. Finally, the last one uses different maximum frame lengths for different users. We simulate and analyze the performance of each scheme with TCP and UDP flows. Keywords—QoS, DiffServ, TCP, UDP, CBR, Wireless communications.",
"title": ""
},
{
"docid": "584645a035454682222a26870377703c",
"text": "Conventionally, the sum and difference signals of a tracking system are fixed up by sum and difference network and the network is often composed of four or more magic tees whose arms direct at four different directions, which give inconveniences to assemble. In this paper, a waveguide side-wall slot directional coupler and a double dielectric slab filled waveguide phase shifter is used to form a planar magic tee with four arms in the same H-plane. Four planar magic tees can be used to construct the W-band planar monopulse comparator. The planar magic tee is analyzed exactly with Ansoft HFSS software, and is optimized by genetic algorithm. Simulation results are presented, which show good performance.",
"title": ""
},
{
"docid": "54bcaafa495d6d778bddbbb5d5cf906e",
"text": "Low-shot visual learning—the ability to recognize novel object categories from very few examples—is a hallmark of human visual intelligence. Existing machine learning approaches fail to generalize in the same way. To make progress on this foundational problem, we present a novel protocol to evaluate low-shot learning on complex images where the learner is permitted to first build a feature representation. Then, we propose and evaluate representation regularization techniques that improve the effectiveness of convolutional networks at the task of low-shot learning, leading to a 2x reduction in the amount of training data required at equal accuracy rates on the challenging ImageNet dataset.",
"title": ""
},
{
"docid": "6c411f36e88a39684eb9779462117e6b",
"text": "Number of people who use internet and websites for various purposes is increasing at an astonishing rate. More and more people rely on online sites for purchasing songs, apparels, books, rented movies etc. The competition between the online sites forced the web site owners to provide personalized services to their customers. So the recommender systems came into existence. Recommender systems are active information filtering systems that attempt to present to the user, information items in which the user is interested in. The websites implement recommender system feature using collaborative filtering, content based or hybrid approaches. The recommender systems also suffer from issues like cold start, sparsity and over specialization. Cold start problem is that the recommenders cannot draw inferences for users or items for which it does not have sufficient information. This paper attempts to propose a solution to the cold start problem by combining association rules and clustering technique. Comparison is done between the performance of the recommender system when association rule technique is used and the performance when association rule and clustering is combined. The experiments with the implemented system proved that accuracy can be improved when association rules and clustering is combined. An accuracy improvement of 36% was achieved by using the combination technique over the association rule technique.",
"title": ""
},
{
"docid": "78fe279ca9a3e355726ffacb09302be5",
"text": "In present, dynamically developing organizations, that often realize business tasks using the project-based approach, effective project management is of paramount importance. Numerous reports and scientific papers present lists of critical success factors in project management, and communication management is usually at the very top of the list. But even though the communication practices are found to be associated with most of the success dimensions, they are not given enough attention and the communication processes and practices formalized in the company's project management methodology are neither followed nor prioritized by project managers. This paper aims at supporting project managers and teams in more effective implementation of best practices in communication management by proposing a set of communication management patterns, which promote a context-problem-solution approach to communication management in projects.",
"title": ""
},
{
"docid": "c796bc689e9b3e2b8d03525e5cd5908c",
"text": "As they grapple with increasingly large data sets, biologists and computer scientists uncork new bottlenecks. B iologists are joining the big-data club. With the advent of high-throughput genomics, life scientists are starting to grapple with massive data sets, encountering challenges with handling, processing and moving information that were once the domain of astronomers and high-energy physicists 1. With every passing year, they turn more often to big data to probe everything from the regulation of genes and the evolution of genomes to why coastal algae bloom, what microbes dwell where in human body cavities and how the genetic make-up of different cancers influences how cancer patients fare 2. The European Bioinformatics Institute (EBI) in Hinxton, UK, part of the European Molecular Biology Laboratory and one of the world's largest biology-data repositories, currently stores 20 petabytes (1 petabyte is 10 15 bytes) of data and backups about genes, proteins and small molecules. Genomic data account for 2 peta-bytes of that, a number that more than doubles every year 3 (see 'Data explosion'). This data pile is just one-tenth the size of the data store at CERN, Europe's particle-physics laboratory near Geneva, Switzerland. Every year, particle-collision events in CERN's Large Hadron Collider generate around 15 petabytes of data — the equivalent of about 4 million high-definition feature-length films. But the EBI and institutes like it face similar data-wrangling challenges to those at CERN, says Ewan Birney, associate director of the EBI. He and his colleagues now regularly meet with organizations such as CERN and the European Space Agency (ESA) in Paris to swap lessons about data storage, analysis and sharing. All labs need to manipulate data to yield research answers. As prices drop for high-throughput instruments such as automated Extremely powerful computers are needed to help biologists to handle big-data traffic jams.",
"title": ""
},
{
"docid": "7e5b18a0356a89a0285f80a2224d8b12",
"text": "Machine recognition of a handwritten mathematical expression (HME) is challenging due to the ambiguities of handwritten symbols and the two-dimensional structure of mathematical expressions. Inspired by recent work in deep learning, we present Watch, Attend and Parse (WAP), a novel end-to-end approach based on neural network that learns to recognize HMEs in a two-dimensional layout and outputs them as one-dimensional character sequences in LaTeX format. Inherently unlike traditional methods, our proposed model avoids problems that stem from symbol segmentation, and it does not require a predefined expression grammar. Meanwhile, the problems of symbol recognition and structural analysis are handled, respectively, using a watcher and a parser. We employ a convolutional neural network encoder that takes HME images as input as the watcher and employ a recurrent neural network decoder equipped with an attention mechanism as the parser to generate LaTeX sequences. Moreover, the correspondence between the input expressions and the output LaTeX sequences is learned automatically by the attention mechanism. We validate the proposed approach on a benchmark published by the CROHME international competition. Using the official training dataset, WAP significantly outperformed the state-of-the-art method with an expression recognition accuracy of 46.55% on CROHME 2014 and 44.55% on CROHME 2016. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
9fc8731e7b2f7d8c4f17816f1d3b0626
|
Clickstream Analytics: An Experimental Analysis of the Amazon Users' Simulated Monthly Traffic
|
[
{
"docid": "3429145583d25ba1d603b5ade11f4312",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
}
] |
[
{
"docid": "12680d4fcf57a8a18d9c2e2b1107bf2d",
"text": "Recent advances in computer and technology resulted into ever increasing set of documents. The need is to classify the set of documents according to the type. Laying related documents together is expedient for decision making. Researchers who perform interdisciplinary research acquire repositories on different topics. Classifying the repositories according to the topic is a real need to analyze the research papers. Experiments are tried on different real and artificial datasets such as NEWS 20, Reuters, emails, research papers on different topics. Term Frequency-Inverse Document Frequency algorithm is used along with fuzzy K-means and hierarchical algorithm. Initially experiment is being carried out on small dataset and performed cluster analysis. The best algorithm is applied on the extended dataset. Along with different clusters of the related documents the resulted silhouette coefficient, entropy and F-measure trend are presented to show algorithm behavior for each data set.",
"title": ""
},
{
"docid": "db53ffe2196586d570ad636decbf67de",
"text": "We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.",
"title": ""
},
{
"docid": "1d0c9c8c439f5fa41fee964caed7c2b1",
"text": "As interactive voice response systems become more prevalent and provide increasingly more complex functionality, it becomes clear that the challenges facing such systems are not solely in their synthesis and recognition capabilities. Issues such as the coordination of turn exchanges between system and user also play an important role in system usability. In particular, both systems and users have difficulty determining when the other is taking or relinquishing the turn. In this paper, we seek to identify turn-taking cues correlated with human–human turn exchanges which are automatically computable. We compare the presence of potential prosodic, acoustic, and lexico-syntactic turn-yielding cues in prosodic phrases preceding turn changes (smooth switches) vs. turn retentions (holds) vs. backchannels in the Columbia Games Corpus, a large corpus of task-oriented dialogues, to determine which features reliably distinguish between these three. We identify seven turn-yielding cues, all of which can be extracted automatically, for future use in turn generation and recognition in interactive voice response (IVR) systems. Testing Duncan’s (1972) hypothesis that these turn-yielding cues are linearly correlated with the occurrence of turn-taking attempts, we further demonstrate that, the greater the number of turn-yielding cues that are present, the greater the likelihood that a turn change will occur. We also identify six cues that precede backchannels, which will also be useful for IVR backchannel generation and recognition; these cues correlate with backchannel occurrence in a quadratic manner. We find similar results for overlapping and for non-overlapping speech. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1b0ebf54bc1d534affc758ced7aef8de",
"text": "We report our study of a silica-water interface using reactive molecular dynamics. This first-of-its-kind simulation achieves length and time scales required to investigate the detailed chemistry of the system. Our molecular dynamics approach is based on the ReaxFF force field of van Duin et al. [J. Phys. Chem. A 107, 3803 (2003)]. The specific ReaxFF implementation (SERIALREAX) and force fields are first validated on structural properties of pure silica and water systems. Chemical reactions between reactive water and dangling bonds on a freshly cut silica surface are analyzed by studying changing chemical composition at the interface. In our simulations, reactions involving silanol groups reach chemical equilibrium in approximately 250 ps. It is observed that water molecules penetrate a silica film through a proton-transfer process we call \"hydrogen hopping,\" which is similar to the Grotthuss mechanism. In this process, hydrogen atoms pass through the film by associating and dissociating with oxygen atoms within bulk silica, as opposed to diffusion of intact water molecules. The effective diffusion constant for this process, taken to be that of hydrogen atoms within silica, is calculated to be 1.68 x 10(-6) cm(2)/s. Polarization of water molecules in proximity of the silica surface is also observed. The subsequent alignment of dipoles leads to an electric potential difference of approximately 10.5 V between the silica slab and water.",
"title": ""
},
{
"docid": "1db72cafa214f41b5b6faa3a3c0c8be0",
"text": "Multiple-antenna receivers offer numerous advantages over single-antenna receivers, including sensitivity improvement, ability to reject interferers spatially and enhancement of data-rate or link reliability via MIMO. In the recent past, RF/analog phased-array receivers have been investigated [1-4]. On the other hand, digital beamforming offers far greater flexibility, including ability to form multiple simultaneous beams, ease of digital array calibration and support for MIMO. However, ADC dynamic range is challenged due to the absence of spatial interference rejection at RF/analog.",
"title": ""
},
{
"docid": "74ccb28a31d5a861bea1adfaab2e9bf1",
"text": "For many decades CMOS devices have been successfully scaled down to achieve higher speed and increased performance of integrated circuits at lower cost. Today’s charge-based CMOS electronics encounters two major challenges: power dissipation and variability. Spintronics is a rapidly evolving research and development field, which offers a potential solution to these issues by introducing novel ‘more than Moore’ devices. Spin-based magnetoresistive random-access memory (MRAM) is already recognized as one of the most promising candidates for future universal memory. Magnetic tunnel junctions, the main elements of MRAM cells, can also be used to build logic-in-memory circuits with non-volatile storage elements on top of CMOS logic circuits, as well as versatile compact on-chip oscillators with low power consumption. We give an overview of CMOS-compatible spintronics applications. First, we present a brief introduction to the physical background considering such effects as magnetoresistance, spin-transfer torque (STT), spin Hall effect, and magnetoelectric effects. We continue with a comprehensive review of the state-of-the-art spintronic devices for memory applications (STT-MRAM, domain wallmotion MRAM, and spin–orbit torque MRAM), oscillators (spin torque oscillators and spin Hall nano-oscillators), logic (logic-in-memory, all-spin logic, and buffered magnetic logic gate grid), sensors, and random number generators. Devices with different types of resistivity switching are analyzed and compared, with their advantages highlighted and challenges revealed. CMOScompatible spintronic devices are demonstrated beginning with predictive simulations, proceeding to their experimental confirmation and realization, and finalized by the current status of application in modern integrated systems and circuits. We conclude the review with an outlook, where we share our vision on the future applications of the prospective devices in the area.",
"title": ""
},
{
"docid": "285587e0e608d8bafa0962b5cf561205",
"text": "BACKGROUND\nGeneralized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap.\n\n\nMETHODS\nParameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1.\n\n\nRESULTS\nIn the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR.\n\n\nCONCLUSIONS\nGAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.",
"title": ""
},
{
"docid": "19f8ae070aa161ca1399b21b6a9c4678",
"text": "Wireless Sensor Network (WSN) is a large scale network with from dozens to thousands tiny devices. Using fields of WSNs (military, health, smart home e.g.) has a large-scale and its usage areas increasing day by day. Secure issue of WSNs is an important research area and applications of WSN have some big security deficiencies. Intrusion Detection System is a second-line of the security mechanism for networks, and it is very important to integrity, confidentiality and availability. Intrusion Detection in WSNs is somewhat different from wired and non-energy constraint wireless network because WSN has some constraints influencing cyber security approaches and attack types. This paper is a survey describing attack types of WSNs intrusion detection approaches being against to this attack types.",
"title": ""
},
{
"docid": "3753bd82d038b2b2b7f03812480fdacd",
"text": "BACKGROUND\nDuring the last few years, an increasing number of unstable thoracolumbar fractures, especially in elderly patients, has been treated by dorsal instrumentation combined with a balloon kyphoplasty. This combination provides additional stabilization to the anterior spinal column without any need for a second ventral approach.\n\n\nCASE PRESENTATION\nWe report the case of a 97-year-old male patient with a lumbar burst fracture (type A3-1.1 according to the AO Classification) who presented prolonged neurological deficits of the lower limbs - grade C according to the modified Frankel/ASIA score. After a posterior realignment of the fractured vertebra with an internal screw fixation and after an augmentation with non-absorbable cement in combination with a balloon kyphoplasty, the patient regained his mobility without any neurological restrictions.\n\n\nCONCLUSION\nEspecially in older patients, the presented technique of PMMA-augmented pedicle screw instrumentation combined with balloon-assisted kyphoplasty could be an option to address unstable vertebral fractures in \"a minor-invasive way\". The standard procedure of a two-step dorsoventral approach could be reduced to a one-step procedure.",
"title": ""
},
{
"docid": "0ad47e79e9bea44a76029e1f24f0a16c",
"text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.",
"title": ""
},
{
"docid": "e12d800b09f2f8f19a138b25d8a8d363",
"text": "This paper proposes a corpus-based approach for answering why-questions. Conventional systems use hand-crafted patterns to extract and evaluate answer candidates. However, such hand-crafted patterns are likely to have low coverage of causal expressions, and it is also difficult to assign suitable weights to the patterns by hand. In our approach, causal expressions are automatically collected from corpora tagged with semantic relations. From the collected expressions, features are created to train an answer candidate ranker that maximizes the QA performance with regards to the corpus of why-questions and answers. NAZEQA, a Japanese why-QA system based on our approach, clearly outperforms a baseline that uses hand-crafted patterns with a Mean Reciprocal Rank (top-5) of 0.305, making it presumably the best-performing fully implemented why-QA system.",
"title": ""
},
{
"docid": "dd1a7e3493b9164af4321db944b4950c",
"text": "The emerging optical/wireless topology reconfiguration technologies have shown great potential in improving the performance of data center networks. However, it also poses a big challenge on how to find the best topology configurations to support the dynamic traffic demands. In this work, we present xWeaver, a traffic-driven deep learning solution to infer the high-performance network topology online. xWeaver supports a powerful network model that enables the topology optimization over different performance metrics and network architectures. With the design of properly-structured neural networks, it can automatically derive the critical traffic patterns from data traces and learn the underlying mapping between the traffic patterns and topology configurations specific to the target data center. After offline training, xWeaver generates the optimized (or near-optimal) topology configuration online, and can also smoothly update its model parameters for new traffic patterns. We build an optical-circuit-switch-based testbed to demonstrate the function and transmission efficiency of our proposed solution. We further perform extensive simulations to show the significant performance gain of xWeaver, in supporting higher network throughput and smaller flow completion time.",
"title": ""
},
{
"docid": "5aa219f23d4be5d18ace0aa0b0b51b76",
"text": "An improved bandgap reference with high power supply rejection (PSR) is presented. The proposed circuit consists of a simple voltage subtractor circuit incorporated into the conventional Brokaw bandgap reference. Essentially, the subtractor feeds the supply noise directly into the feedback loop of the bandgap circuit which could help to suppress supply noise. The simulation results have been shown to conform well with the theoretical evaluation. The proposed circuit has also shown robust performance across temperature and process variations. where PSRRl is the power supply rejection ratio of opamp and is given by PSRRl = A 1 / A d d l . Also, gmQI . P 2 = gmQz , and A I and A d d l are the PI = g m Q , + R , + R 2 gn~Q2+~3 open-loop differential gain and power gain of amplifier respectively.",
"title": ""
},
{
"docid": "3a21628b7ca55d2910da220f0c866bea",
"text": "BACKGROUND\nType 2 diabetes is associated with a substantially increased risk of cardiovascular disease, but the role of lipid-lowering therapy with statins for the primary prevention of cardiovascular disease in diabetes is inadequately defined. We aimed to assess the effectiveness of atorvastatin 10 mg daily for primary prevention of major cardiovascular events in patients with type 2 diabetes without high concentrations of LDL-cholesterol.\n\n\nMETHODS\n2838 patients aged 40-75 years in 132 centres in the UK and Ireland were randomised to placebo (n=1410) or atorvastatin 10 mg daily (n=1428). Study entrants had no documented previous history of cardiovascular disease, an LDL-cholesterol concentration of 4.14 mmol/L or lower, a fasting triglyceride amount of 6.78 mmol/L or less, and at least one of the following: retinopathy, albuminuria, current smoking, or hypertension. The primary endpoint was time to first occurrence of the following: acute coronary heart disease events, coronary revascularisation, or stroke. Analysis was by intention to treat.\n\n\nFINDINGS\nThe trial was terminated 2 years earlier than expected because the prespecified early stopping rule for efficacy had been met. Median duration of follow-up was 3.9 years (IQR 3.0-4.7). 127 patients allocated placebo (2.46 per 100 person-years at risk) and 83 allocated atorvastatin (1.54 per 100 person-years at risk) had at least one major cardiovascular event (rate reduction 37% [95% CI -52 to -17], p=0.001). Treatment would be expected to prevent at least 37 major vascular events per 1000 such people treated for 4 years. Assessed separately, acute coronary heart disease events were reduced by 36% (-55 to -9), coronary revascularisations by 31% (-59 to 16), and rate of stroke by 48% (-69 to -11). Atorvastatin reduced the death rate by 27% (-48 to 1, p=0.059). No excess of adverse events was noted in the atorvastatin group.\n\n\nINTERPRETATION\nAtorvastatin 10 mg daily is safe and efficacious in reducing the risk of first cardiovascular disease events, including stroke, in patients with type 2 diabetes without high LDL-cholesterol. No justification is available for having a particular threshold level of LDL-cholesterol as the sole arbiter of which patients with type 2 diabetes should receive statins. The debate about whether all people with this disorder warrant statin treatment should now focus on whether any patients are at sufficiently low risk for this treatment to be withheld.",
"title": ""
},
{
"docid": "a99b1a9409ea1241695590814e685828",
"text": "A two-phase heat spreader has been developed for cooling high heat flux sources in high-power lasers, high-intensity light-emitting diodes (LEDs), and semiconductor power devices. The heat spreader uses a passive mechanism to cool heat sources with fluxes as high as 5 W/mm2 without requiring any active power consumption for the thermal solution. The prototype is similar to a vapor chamber in which water is injected into an evacuated, air-tight shell. The shell consists of an evaporator plate, a condenser plate and an adiabatic section. The heat source is made from aluminum nitride, patterned with platinum. The heat source contains a temperature sensor and is soldered to a copper substrate that serves as the evaporator. Tests were performed with several different evaporator microstructures at different heat loads. A screen mesh was able to dissipate heat loads of 2 W/mm2, but at unacceptably high evaporator temperatures. For sintered copper powder with a 50 µm particle diameter, a heat load of 8.5 W/mm2 was supported, without the occurrence of dryout. A sintered copper powder surface coated with multi-walled carbon nanotubes (CNT) that were rendered hydrophilic showed a lowered thermal resistance for the device.",
"title": ""
},
{
"docid": "a274e05ba07259455d0e1fef57f2c613",
"text": "Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover images. The Least Significant Bit (LSB) steganography that replaces the least significant bits of the host medium is a widely used technique with low computational complexity and high insertion capacity. Although it has good perceptual transparency, it is vulnerable to steganalysis which is based on histogram analysis. In all the existing schemes detection of a secret message in a cover image can be easily detected from the histogram analysis and statistical analysis. Therefore developing new LSB steganography algorithms against statistical and histogram analysis is the prime requirement.",
"title": ""
},
{
"docid": "37a47bd2561b534d5734d250d16ff1c2",
"text": "Many chronic eye diseases can be conveniently investigated by observing structural changes in retinal blood vessel diameters. However, detecting changes in an accurate manner in face of interfering pathologies is a challenging task. The task is generally performed through an automatic computerized process. The literature shows that powerful methods have already been proposed to identify vessels in retinal images. Though a significant progress has been achieved toward methods to separate blood vessels from the uneven background, the methods still lack the necessary sensitivity to segment fine vessels. Recently, a multi-scale line-detector method proved its worth in segmenting thin vessels. This paper presents modifications to boost the sensitivity of this multi-scale line detector. First, a varying window size with line-detector mask is suggested to detect small vessels. Second, external orientations are fed to steer the multi-scale line detectors into alignment with flow directions. Third, optimal weights are suggested for weighted linear combinations of individual line-detector responses. Fourth, instead of using one global threshold, a hysteresis threshold is proposed to find a connected vessel tree. The overall impact of these modifications is a large improvement in noise removal capability of the conventional multi-scale line-detector method while finding more of the thin vessels. The contrast-sensitive steps are validated using a publicly available database and show considerable promise for the suggested strategy.",
"title": ""
},
{
"docid": "3b62ccd8e989d81f86b557e8d35a8742",
"text": "The ability to accurately judge the similarity between natural language sentences is critical to the performance of several applications such as text mining, question answering, and text summarization. Given two sentences, an effective similarity measure should be able to determine whether the sentences are semantically equivalent or not, taking into account the variability of natural language expression. That is, the correct similarity judgment should be made even if the sentences do not share similar surface form. In this work, we evaluate fourteen existing text similarity measures which have been used to calculate similarity score between sentences in many text applications. The evaluation is conducted on three different data sets, TREC9 question variants, Microsoft Research paraphrase corpus, and the third recognizing textual entailment data set.",
"title": ""
},
{
"docid": "c43ad751dade7d0a5a396f95cc904030",
"text": "The electric grid is radically evolving and transforming into the smart grid, which is characterized by improved energy efficiency and manageability of available resources. Energy management (EM) systems, often integrated with home automation systems, play an important role in the control of home energy consumption and enable increased consumer participation. These systems provide consumers with information about their energy consumption patterns and help them adopt energy-efficient behavior. The new generation EM systems leverage advanced analytics and communication technologies to offer consumers actionable information and control features, while ensuring ease of use, availability, security, and privacy. In this article, we present a survey of the state of the art in EM systems, applications, and frameworks. We define a set of requirements for EM systems and evaluate several EM systems in this context. We also discuss emerging trends in this area.",
"title": ""
},
{
"docid": "a7fa5171308a566a19da39ee6d7b74f6",
"text": "Machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. We propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. We represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. Finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.",
"title": ""
}
] |
scidocsrr
|
8ac37d86ad7ea7c70031ac22ebb19981
|
The red one!: On learning to refer to things based on discriminative properties
|
[
{
"docid": "08768f6cf1305884a735bbe4e7e98474",
"text": "Language is sensitive to both semantic and pragmatic effects. To capture both effects, we model language use as a cooperative game between two players: a speaker, who generates an utterance, and a listener, who responds with an action. Specifically, we consider the task of generating spatial references to objects, wherein the listener must accurately identify an object described by the speaker. We show that a speaker model that acts optimally with respect to an explicit, embedded listener model substantially outperforms one that is trained to directly generate spatial descriptions.",
"title": ""
},
{
"docid": "f6a66ea4a5e8683bae76e71912694874",
"text": "We consider the task of learning visual connections between object categories using the ImageNet dataset, which is a large-scale dataset ontology containing more than 15 thousand object classes. We want to discover visual relationships between the classes that are currently missing (such as similar colors or shapes or textures). In this work we learn 20 visual attributes and use them both in a zero-shot transfer learning experiment as well as to make visual connections between semantically unrelated object categories.",
"title": ""
}
] |
[
{
"docid": "388f4a555c7aa004f081cbdc6bc0f799",
"text": "We present a multi-GPU version of GPUSPH, a CUDA implementation of fluid-dynamics models based on the smoothed particle hydrodynamics (SPH) numerical method. The SPH is a well-known Lagrangian model for the simulation of free-surface fluid flows; it exposes a high degree of parallelism and has already been successfully ported to GPU. We extend the GPU-based simulator to run simulations on multiple GPUs simultaneously, to obtain a gain in speed and overcome the memory limitations of using a single device. The computational domain is spatially split with minimal overlapping and shared volume slices are updated at every iteration of the simulation. Data transfers are asynchronous with computations, thus completely covering the overhead introduced by slice exchange. A simple yet effective load balancing policy preserves the performance in case of unbalanced simulations due to asymmetric fluid topologies. The obtained speedup factor (up to 4.5x for 6 GPUs) closely follows the expected one (5x for 6 GPUs) and it is possible to run simulations with a higher number of particles than would fit on a single device. We use the Karp-Flatt metric to formally estimate the overall efficiency of the parallelization.",
"title": ""
},
{
"docid": "d6ca38ccad91c0c2c51ba3dd5be454b2",
"text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.",
"title": ""
},
{
"docid": "72cfe76ea68d5692731531aea02444d0",
"text": "Primary human tumor culture models allow for individualized drug sensitivity testing and are therefore a promising technique to achieve personalized treatment for cancer patients. This would especially be of interest for patients with advanced stage head and neck cancer. They are extensively treated with surgery, usually in combination with high-dose cisplatin chemoradiation. However, adding cisplatin to radiotherapy is associated with an increase in severe acute toxicity, while conferring only a minor overall survival benefit. Hence, there is a strong need for a preclinical model to identify patients that will respond to the intended treatment regimen and to test novel drugs. One of such models is the technique of culturing primary human tumor tissue. This review discusses the feasibility and success rate of existing primary head and neck tumor culturing techniques and their corresponding chemo- and radiosensitivity assays. A comprehensive literature search was performed and success factors for culturing in vitro are debated, together with the actual value of these models as preclinical prediction assay for individual patients. With this review, we aim to fill a gap in the understanding of primary culture models from head and neck tumors, with potential importance for other tumor types as well.",
"title": ""
},
{
"docid": "0772a2f393b1820e6fa8970cc14339a2",
"text": "The internet is empowering the rise of crowd work, gig work, and other forms of on--demand labor. A large and growing body of scholarship has attempted to predict the socio--technical outcomes of this shift, especially addressing three questions: begin{inlinelist} item What are the complexity limits of on-demand work?, item How far can work be decomposed into smaller microtasks?, and item What will work and the place of work look like for workers' end {inlinelist} In this paper, we look to the historical scholarship on piecework --- a similar trend of work decomposition, distribution, and payment that was popular at the turn of the nth{20} century --- to understand how these questions might play out with modern on--demand work. We identify the mechanisms that enabled and limited piecework historically, and identify whether on--demand work faces the same pitfalls or might differentiate itself. This approach introduces theoretical grounding that can help address some of the most persistent questions in crowd work, and suggests design interventions that learn from history rather than repeat it.",
"title": ""
},
{
"docid": "cd48180e93d25858410222fff4b1f43e",
"text": "Metaphors pervade discussions of social issues like climate change, the economy, and crime. We ask how natural language metaphors shape the way people reason about such social issues. In previous work, we showed that describing crime metaphorically as a beast or a virus, led people to generate different solutions to a city's crime problem. In the current series of studies, instead of asking people to generate a solution on their own, we provided them with a selection of possible solutions and asked them to choose the best ones. We found that metaphors influenced people's reasoning even when they had a set of options available to compare and select among. These findings suggest that metaphors can influence not just what solution comes to mind first, but also which solution people think is best, even when given the opportunity to explicitly compare alternatives. Further, we tested whether participants were aware of the metaphor. We found that very few participants thought the metaphor played an important part in their decision. Further, participants who had no explicit memory of the metaphor were just as much affected by the metaphor as participants who were able to remember the metaphorical frame. These findings suggest that metaphors can act covertly in reasoning. Finally, we examined the role of political affiliation on reasoning about crime. The results confirm our previous findings that Republicans are more likely to generate enforcement and punishment solutions for dealing with crime, and are less swayed by metaphor than are Democrats or Independents.",
"title": ""
},
{
"docid": "a478928c303153172133d805ac35c6cc",
"text": "Chest X-ray is one of the most accessible medical imaging technique for diagnosis of multiple diseases. With the availability of ChestX-ray14, which is a massive dataset of chest X-ray images and provides annotations for 14 thoracic diseases; it is possible to train Deep Convolutional Neural Networks (DCNN) to build Computer Aided Diagnosis (CAD) systems. In this work, we experiment a set of deep learning models and present a cascaded deep neural network that can diagnose all 14 pathologies better than the baseline and is competitive with other published methods. Our work provides the quantitative results to answer following research questions for the dataset: 1) What loss functions to use for training DCNN from scratch on ChestXray14 dataset that demonstrates high class imbalance and label co occurrence? 2) How to use cascading to model label dependency and to improve accuracy of the deep learning model?",
"title": ""
},
{
"docid": "8f089d55c0ce66db7bbf27476267a8e5",
"text": "Planning radar sites is very important for several civilian and military applications. Depending on the security or defence issue different requirements exist regarding the radar coverage and the radar sites. QSiteAnalysis offers several functions to automate, improve and speed up this highly complex task. Wave propagation effects such as diffraction, refraction, multipath and atmospheric attenuation are considered for the radar coverage calculation. Furthermore, an automatic optimisation of the overall coverage is implemented by optimising the radar sites. To display the calculation result, the calculated coverage is visualised in 2D and 3D. Therefore, QSiteAnalysis offers several functions to improve and automate radar site studies.",
"title": ""
},
{
"docid": "e33fa3ebbd612dbc6e76feebde52d3d9",
"text": "In this paper, we introduce a general iterative human-machine collaborative method for training crowdsource workers: the classifier (i.e., the machine) selects the highest quality examples for training the crowdsource workers (i.e., the humans). Then, the latter annotate the lower quality examples such that the classifier can be re-trained with more accurate examples. This process can be iterated several times. We tested our approach on two different tasks, Relation Extraction and Community Question Answering, which are also in two different languages, English and Arabic, respectively. Our experimental results show a significant improvement for creating Gold Standard data over distant supervision or just crowdsourcing without worker training. At the same time, our method approach the performance than state-of-the-art methods using expensive Gold Standard for training workers",
"title": ""
},
{
"docid": "34dcd712c5eae560f3d611fcc8ef9825",
"text": "Do I understand the problem of P vs. NP? The answer is a simple \"no\". If I were to understand the problem, I would've solved it as well\" — This is the current state of many theoretical computer scientists around the world. Apart from a bag of laureates waiting for the person who successfully understands this most popular millennium prize riddle, this is also considered to be a game changer in both mathematics and computer science. According to Scott Aaronson, \"If P = NP, then the world would be a profoundly different place than we usually assume it to be\". The speaker intends to share the part that he understood on the problem, and about the efforts that were recently put-forth in cracking the same.",
"title": ""
},
{
"docid": "d28d956c271189f4909ed11f0e5c342a",
"text": "This article presents new oscillation criteria for the second-order delay differential equation (p(t)(x′(t))α)′ + q(t)x(t− τ) + n X i=1 qi(t)x αi (t− τ) = e(t) where τ ≥ 0, p(t) ∈ C1[0,∞), q(t), qi(t), e(t) ∈ C[0,∞), p(t) > 0, α1 > · · · > αm > α > αm+1 > · · · > αn > 0 (n > m ≥ 1), α1, . . . , αn and α are ratio of odd positive integers. Without assuming that q(t), qi(t) and e(t) are nonnegative, the results in [6, 8] have been extended and a mistake in the proof of the results in [3] is corrected.",
"title": ""
},
{
"docid": "7ebaee3df1c8ee4bf1c82102db70f295",
"text": "Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity.",
"title": ""
},
{
"docid": "5403ebc5a8fc5789809145fb8114bb63",
"text": "This paper explores why occupational therapists use arts and crafts as therapeutic modalities. Beginning with the turn-of-the-century origins of occupational therapy, the paper traces the similarities and differences in the ideas and beliefs of the founders of occupational therapy and the proponents of the arts-and-crafts movement.",
"title": ""
},
{
"docid": "4c29f5ffaeff5911e3d5f7a85146c601",
"text": "In August 2004, Duke University provided free iPods to its entire freshman class (Belanger, 2005). The next month, a Korean education firm offered free downloadable college entrance exam lectures to students who purchased an iRiver personal multimedia player (Kim, 2004). That October, a financial trading firm in Chicago was reportedly assessing the hand-eye coordination of traders’ using GameBoys (Logan, 2004). Yet while such innovative applications abound, the use of technology in education and training is far from new, a fact as true in language classrooms as it is in medical schools.",
"title": ""
},
{
"docid": "f5352a1eee7340bf7c7e37b1210c7b99",
"text": "In recent years, traditional cybersecurity safeguards have proven ineffective against insider threats. Famous cases of sensitive information leaks caused by insiders, including the WikiLeaks release of diplomatic cables and the Edward Snowden incident, have greatly harmed the U.S. government's relationship with other governments and with its own citizens. Data Leak Prevention (DLP) is a solution for detecting and preventing information leaks from within an organization's network. However, state-of-art DLP detection models are only able to detect very limited types of sensitive information, and research in the field has been hindered due to the lack of available sensitive texts. Many researchers have focused on document-based detection with artificially labeled “confidential documents” for which security labels are assigned to the entire document, when in reality only a portion of the document is sensitive. This type of whole-document based security labeling increases the chances of preventing authorized users from accessing non-sensitive information within sensitive documents. In this paper, we introduce Automated Classification Enabled by Security Similarity (ACESS), a new and innovative detection model that penetrates the complexity of big text security classification/detection. To analyze the ACESS system, we constructed a novel dataset, containing formerly classified paragraphs from diplomatic cables made public by the WikiLeaks organization. To our knowledge this paper is the first to analyze a dataset that contains actual formerly sensitive information annotated at paragraph granularity.",
"title": ""
},
{
"docid": "68473e74e1c188d41f4ea42028728a18",
"text": "The mastery of fundamental movement skills (FMS) has been purported as contributing to children's physical, cognitive and social development and is thought to provide the foundation for an active lifestyle. Commonly developed in childhood and subsequently refined into context- and sport-specific skills, they include locomotor (e.g. running and hopping), manipulative or object control (e.g. catching and throwing) and stability (e.g. balancing and twisting) skills. The rationale for promoting the development of FMS in childhood relies on the existence of evidence on the current or future benefits associated with the acquisition of FMS proficiency. The objective of this systematic review was to examine the relationship between FMS competency and potential health benefits in children and adolescents. Benefits were defined in terms of psychological, physiological and behavioural outcomes that can impact public health. A systematic search of six electronic databases (EMBASE, OVID MEDLINE, PsycINFO, PubMed, Scopus and SportDiscus®) was conducted on 22 June 2009. Included studies were cross-sectional, longitudinal or experimental studies involving healthy children or adolescents (aged 3-18 years) that quantitatively analysed the relationship between FMS competency and potential benefits. The search identified 21 articles examining the relationship between FMS competency and eight potential benefits (i.e. global self-concept, perceived physical competence, cardio-respiratory fitness [CRF], muscular fitness, weight status, flexibility, physical activity and reduced sedentary behaviour). We found strong evidence for a positive association between FMS competency and physical activity in children and adolescents. There was also a positive relationship between FMS competency and CRF and an inverse association between FMS competency and weight status. Due to an inadequate number of studies, the relationship between FMS competency and the remaining benefits was classified as uncertain. More longitudinal and intervention research examining the relationship between FMS competency and potential psychological, physiological and behavioural outcomes in children and adolescents is recommended.",
"title": ""
},
{
"docid": "a8478fa2a7088c270f1b3370bb06d862",
"text": "Sodium-ion batteries (SIBs) are prospective alternative to lithium-ion batteries for large-scale energy-storage applications, owing to the abundant resources of sodium. Metal sulfides are deemed to be promising anode materials for SIBs due to their low-cost and eco-friendliness. Herein, for the first time, series of copper sulfides (Cu2S, Cu7S4, and Cu7KS4) are controllably synthesized via a facile electrochemical route in KCl-NaCl-Na2S molten salts. The as-prepared Cu2S with micron-sized flakes structure is first investigated as anode of SIBs, which delivers a capacity of 430 mAh g-1 with a high initial Coulombic efficiency of 84.9% at a current density of 100 mA g-1. Moreover, the Cu2S anode demonstrates superior capability (337 mAh g-1 at 20 A g-1, corresponding to 50 C) and ultralong cycle performance (88.2% of capacity retention after 5000 cycles at 5 A g-1, corresponding to 0.0024% of fade rate per cycle). Meanwhile, the pseudocapacitance contribution and robust porous structure in situ formed during cycling endow the Cu2S anodes with outstanding rate capability and enhanced cyclic performance, which are revealed by kinetics analysis and ex situ characterization.",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1d1005fe036932695a7706cde950fe75",
"text": "In recent years, the use of mobile ad hoc networks (MANETs) has been widespread in many applications, including some mission critical applications, and as such security has become one of the major concerns in MANETs. Due to some unique characteristics of MANETs, prevention methods alone are not sufficient to make them secure; therefore, detection should be added as another defense before an attacker can breach the system. In general, the intrusion detection techniques for traditional wireless networks are not well suited for MANETs. In this paper, we classify the architectures for intrusion detection systems (IDS) that have been introduced for MANETs. Current IDS’s corresponding to those architectures are also reviewed and compared. We then provide some directions for future research.",
"title": ""
},
{
"docid": "635f090bc5d0bf928640aaaaa1e16861",
"text": "Event-based social networks (EBSNs) provide convenient online platforms for users to organize, attend and share social events. Understanding users’ social influences in social networks can benefit many applications, such as social recommendation and social marketing. In this paper, we focus on the problem of predicting users’ social influences on upcoming events in EBSNs. We formulate this prediction problem as the estimation of unobserved entries of the constructed user-event social influence matrix, where each entry represents the influence value of a user on an event. In particular, we define a user's social influence on a given event as the proportion of the user's friends who are influenced by him/her to attend the event. To solve this problem, we present a hybrid collaborative filtering model, namely, Matrix Factorization with Event-User Neighborhood (MF-EUN) model, by incorporating both event-based and user-based neighborhood methods into matrix factorization. Due to the fact that the constructed social influence matrix is very sparse and the overlap values in the matrix are few, it is challenging to find reliable similar neighbors using the widely adopted similarity measures (e.g., Pearson correlation and Cosine similarity). To address this challenge, we propose an additional information based neighborhood discovery (AID) method by considering both event-specific and user-specific features in EBSNs. The parameters of our MF-EUN model are determined by minimizing the associated regularized squared error function through stochastic gradient descent. We conduct a comprehensive performance evaluation on real-world datasets collected from DoubanEvent. Experimental results show that our proposed hybrid collaborative filtering model is superior than several alternatives, which provides excellent performance with RMSE and MAE reaching 0.248 and 0.1266 respectively in the 90% training data of 10 000",
"title": ""
},
{
"docid": "c263d0c704069ecbdd9d27e9722536e3",
"text": "This paper proposes a chaos-based true random number generator using image as nondeterministic entropy sources. Logistic map is applied to permute and diffuse the image to produce a random sequence after the image is divided to bit-planes. The generated random sequence passes NIST 800-22 test suite with good performance.",
"title": ""
}
] |
scidocsrr
|
a78be6c9a0927113b9fa7925014fab58
|
End-to-end visual speech recognition with LSTMS
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "7d78ca30853ed8a84bbb56fe82e3b9ba",
"text": "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21% relative over a baseline multi-stream audio-visual GMM/HMM system.",
"title": ""
}
] |
[
{
"docid": "14b48440dd0b797cec04bbc249ee9940",
"text": "T cells use integrins in essentially all of their functions. They use integrins to migrate in and out of lymph nodes and, following infection, to migrate into other tissues. At the beginning of an immune response, integrins also participate in the immunological synapse formed between T cells and antigen-presenting cells. Because the ligands for integrins are widely expressed, integrin activity on T cells must be tightly controlled. Integrins become active following signalling through other membrane receptors, which cause both affinity alteration and an increase in integrin clustering. Lipid raft localization may increase integrin activity. Signalling pathways involving ADAP, Vav-1 and SKAP-55, as well as Rap1 and RAPL, cause clustering of leukocyte function-associated antigen-1 (LFA-1; integrin alphaLbeta2). T-cell integrins can also signal, and the pathways dedicated to the migratory activity of T cells have been the most investigated so far. Active LFA-1 causes T-cell attachment and lamellipodial movement induced by myosin light chain kinase at the leading edge, whereas RhoA and ROCK cause T-cell detachment at the trailing edge. Another important signalling pathway acts through CasL/Crk, which might regulate the activity of the GTPases Rac and Rap1 that have important roles in T-cell migration.",
"title": ""
},
{
"docid": "541075ddb29dd0acdf1f0cf3784c220a",
"text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves the stateof-the-arts performance. 1",
"title": ""
},
{
"docid": "c071d5a7ff1dbfd775e9ffdee1b07662",
"text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.",
"title": ""
},
{
"docid": "ab50f458d919ba3ac3548205418eea62",
"text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295",
"title": ""
},
{
"docid": "531ac7d6500373005bae464c49715288",
"text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.",
"title": ""
},
{
"docid": "b5eafe60989c0c4265fa910c79bbce41",
"text": "Little research has addressed IT professionals’ script debugging strategies, or considered whether there may be gender differences in these strategies. What strategies do male and female scripters use and what kinds of mechanisms do they employ to successfully fix bugs? Also, are scripters’ debugging strategies similar to or different from those of spreadsheet debuggers? Without the answers to these questions, tool designers do not have a target to aim at for supporting how male and female scripters want to go about debugging. We conducted a think-aloud study to bridge this gap. Our results include (1) a generalized understanding of debugging strategies used by spreadsheet users and scripters, (2) identification of the multiple mechanisms scripters employed to carry out the strategies, and (3) detailed examples of how these debugging strategies were employed by males and females to successfully fix bugs.",
"title": ""
},
{
"docid": "8505afb27c5ef73baeaa53dfe1c337ae",
"text": "The Osprey (Pandion haliaetus) is one of only six bird species with an almost world-wide distribution. We aimed at clarifying its phylogeographic structure and elucidating its taxonomic status (as it is currently separated into four subspecies). We tested six biogeographical scenarios to explain how the species’ distribution and differentiation took place in the past and how such a specialized raptor was able to colonize most of the globe. Using two mitochondrial genes (cyt b and ND2), the Osprey appeared structured into four genetic groups representing quasi non-overlapping geographical regions. The group Indo-Australasia corresponds to the cristatus ssp, as well as the group Europe-Africa to the haliaetus ssp. In the Americas, we found a single lineage for both carolinensis and ridgwayi ssp, whereas in north-east Asia (Siberia and Japan), we discovered a fourth new lineage. The four lineages are well differentiated, contrasting with the low genetic variability observed within each clade. Historical demographic reconstructions suggested that three of the four lineages experienced stable trends or slight demographic increases. Molecular dating estimates the initial split between lineages at about 1.16 Ma ago, in the Early Pleistocene. Our biogeographical inference suggests a pattern of colonization from the American continent towards the Old World. Populations of the Palearctic would represent the last outcomes of this colonization. At a global scale the Osprey complex may be composed of four different Evolutionary Significant Units, which should be treated as specific management units. Our study brought essential genetic clarifications, which have implications for conservation strategies in identifying distinct lineages across which birds should not be artificially moved through exchange/reintroduction schemes.",
"title": ""
},
{
"docid": "eb0ec729796a93f36d348e70e3fa9793",
"text": "This paper proposes a novel approach to measure the object size using a regular digital camera. Nowadays, the remote object-size measurement is very crucial to many multimedia applications. Our proposed computer-aided automatic object-size measurement technique is based on a new depth-information extraction (range finding) scheme using a regular digital camera. The conventional range finders are often carried out using the passive method such as stereo cameras or the active method such as ultrasonic and infrared equipment. They either require the cumbersome set-up or deal with point targets only. The proposed approach requires only a digital camera with certain image processing techniques and relies on the basic principles of visible light. Experiments are conducted to evaluate the performance of our proposed new object-size measurement mechanism. The average error-percentage of this method is below 2%. It demonstrates the striking effectiveness of our proposed new method.",
"title": ""
},
{
"docid": "21961041e3bf66d7e3f004c65ddc5da2",
"text": "A novel high step-up converter is proposed for a front-end photovoltaic system. Through a voltage multiplier module, an asymmetrical interleaved high step-up converter obtains high step-up gain without operating at an extreme duty ratio. The voltage multiplier module is composed of a conventional boost converter and coupled inductors. An extra conventional boost converter is integrated into the first phase to achieve a considerably higher voltage conversion ratio. The two-phase configuration not only reduces the current stress through each power switch, but also constrains the input current ripple, which decreases the conduction losses of metal-oxide-semiconductor field-effect transistors (MOSFETs). In addition, the proposed converter functions as an active clamp circuit, which alleviates large voltage spikes across the power switches. Thus, the low-voltage-rated MOSFETs can be adopted for reductions of conduction losses and cost. Efficiency improves because the energy stored in leakage inductances is recycled to the output terminal. Finally, the prototype circuit with a 40-V input voltage, 380-V output, and 1000- W output power is operated to verify its performance. The highest efficiency is 96.8%.",
"title": ""
},
{
"docid": "2a818337c472caa1e693edb05722954b",
"text": "UNLABELLED\nThis study focuses on the relationship between classroom ventilation rates and academic achievement. One hundred elementary schools of two school districts in the southwest United States were included in the study. Ventilation rates were estimated from fifth-grade classrooms (one per school) using CO(2) concentrations measured during occupied school days. In addition, standardized test scores and background data related to students in the classrooms studied were obtained from the districts. Of 100 classrooms, 87 had ventilation rates below recommended guidelines based on ASHRAE Standard 62 as of 2004. There is a linear association between classroom ventilation rates and students' academic achievement within the range of 0.9-7.1 l/s per person. For every unit (1 l/s per person) increase in the ventilation rate within that range, the proportion of students passing standardized test (i.e., scoring satisfactory or above) is expected to increase by 2.9% (95%CI 0.9-4.8%) for math and 2.7% (0.5-4.9%) for reading. The linear relationship observed may level off or change direction with higher ventilation rates, but given the limited number of observations, we were unable to test this hypothesis. A larger sample size is needed for estimating the effect of classroom ventilation rates higher than 7.1 l/s per person on academic achievement.\n\n\nPRACTICAL IMPLICATIONS\nThe results of this study suggest that increasing the ventilation rates toward recommended guideline ventilation rates in classrooms should translate into improved academic achievement of students. More studies are needed to fully understand the relationships between ventilation rate, other indoor environmental quality parameters, and their effects on students' health and achievement. Achieving the recommended guidelines and pursuing better understanding of the underlying relationships would ultimately support both sustainable and productive school environments for students and personnel.",
"title": ""
},
{
"docid": "bcab7b2f12f72c6db03446046586381e",
"text": "The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively researched, there is still little focus on detective controls related to cloud accountability and audit ability. The complexity resulting from large-scale virtualization and data distribution carried out in current clouds has revealed an urgent research agenda for cloud accountability, as has the shift in focus of customer concerns from servers to data. This paper discusses key issues and challenges in achieving a trusted cloud through the use of detective controls, and presents the Trust Cloud framework, which addresses accountability in cloud computing via technical and policy-based approaches.",
"title": ""
},
{
"docid": "8f449e62b300c4c8ff62306d02f2f820",
"text": "The effects of adrenal corticosteroids on subsequent adrenocorticotropin secretion are complex. Acutely (within hours), glucocorticoids (GCs) directly inhibit further activity in the hypothalamo-pituitary-adrenal axis, but the chronic actions (across days) of these steroids on brain are directly excitatory. Chronically high concentrations of GCs act in three ways that are functionally congruent. (i) GCs increase the expression of corticotropin-releasing factor (CRF) mRNA in the central nucleus of the amygdala, a critical node in the emotional brain. CRF enables recruitment of a chronic stress-response network. (ii) GCs increase the salience of pleasurable or compulsive activities (ingesting sucrose, fat, and drugs, or wheel-running). This motivates ingestion of \"comfort food.\" (iii) GCs act systemically to increase abdominal fat depots. This allows an increased signal of abdominal energy stores to inhibit catecholamines in the brainstem and CRF expression in hypothalamic neurons regulating adrenocorticotropin. Chronic stress, together with high GC concentrations, usually decreases body weight gain in rats; by contrast, in stressed or depressed humans chronic stress induces either increased comfort food intake and body weight gain or decreased intake and body weight loss. Comfort food ingestion that produces abdominal obesity, decreases CRF mRNA in the hypothalamus of rats. Depressed people who overeat have decreased cerebrospinal CRF, catecholamine concentrations, and hypothalamo-pituitary-adrenal activity. We propose that people eat comfort food in an attempt to reduce the activity in the chronic stress-response network with its attendant anxiety. These mechanisms, determined in rats, may explain some of the epidemic of obesity occurring in our society.",
"title": ""
},
{
"docid": "3e691cf6055eb564dedca955b816a654",
"text": "Many Internet-based services have already been ported to the mobile-based environment, embracing the new services is therefore critical to deriving revenue for services providers. Based on a valence framework and trust transfer theory, we developed a trust-based customer decision-making model of the non-independent, third-party mobile payment services context. We empirically investigated whether a customer’s established trust in Internet payment services is likely to influence his or her initial trust in mobile payment services. We also examined how these trust beliefs might interact with both positive and negative valence factors and affect a customer’s adoption of mobile payment services. Our SEM analysis indicated that trust indeed had a substantial impact on the cross-environment relationship and, further, that trust in combination with the positive and negative valence determinants directly and indirectly influenced behavioral intention. In addition, the magnitudes of these effects on workers and students were significantly different from each other. 2011 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +86 27 8755 8100; fax: +86 27 8755 6437. E-mail addresses: [email protected] (Y. Lu), [email protected] (S. Yang), [email protected] (Patrick Y.K. Chau), [email protected] (Y. Cao). 1 Tel.: +86 27 8755 6448. 2 Tel.: +852 2859 1025. 3 Tel.: +86 27 8755 8100.",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
},
{
"docid": "d0c940a651b1231c6ef4f620e7acfdcc",
"text": "Harvard Business School Working Paper Number 05-016. Working papers are distributed in draft form for purposes of comment and discussion only. They may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author(s). Abstract Much recent research has pointed to the critical role of architecture in the development of a firm's products, services and technical capabilities. A common theme in these studies is the notion that specific characteristics of a product's design – for example, the degree of modularity it exhibits – can have a profound effect on among other things, its performance, the flexibility of the process used to produce it, the value captured by its producer, and the potential for value creation at the industry level. Unfortunately, this stream of work has been limited by the lack of appropriate tools, metrics and terminology for characterizing key attributes of a product's architecture in a robust fashion. As a result, there is little empirical evidence that the constructs emerging in the literature have power in predicting the phenomena with which they are associated. This paper reports data from a research project which seeks to characterize the differences in design structure between complex software products. In particular, we adopt a technique based upon Design Structure Matrices (DSMs) to map the dependencies between different elements of a design then develop metrics that allow us to compare the structures of these different DSMs. We demonstrate the power of this approach in two ways: First, we compare the design structures of two complex software products – the Linux operating system and the Mozilla web browser – that were developed via contrasting modes of organization: specifically, open source versus proprietary development. We find significant differences in their designs, consistent with an interpretation that Linux possesses a more \" modular \" architecture. We then track the evolution of Mozilla, paying particular attention to a major \" redesign \" effort that took place several months after its release as an open source product. We show that this effort resulted in a design structure that was significantly more modular than its predecessor, and indeed, more modular than that of a comparable version of Linux. Our findings demonstrate that it is possible to characterize the structure of complex product designs and draw meaningful conclusions about the precise ways in which they differ. We provide a description of a set of tools …",
"title": ""
},
{
"docid": "0dbca0a2aec1b27542463ff80fc4f59d",
"text": "An emerging research area named Learning-to-Rank (LtR) has shown that effective solutions to the ranking problem can leverage machine learning techniques applied to a large set of features capturing the relevance of a candidate document for the user query. Large-scale search systems must however answer user queries very fast, and the computation of the features for candidate documents must comply with strict back-end latency constraints. The number of features cannot thus grow beyond a given limit, and Feature Selection (FS) techniques have to be exploited to find a subset of features that both meets latency requirements and leads to high effectiveness of the trained models. In this paper, we propose three new algorithms for FS specifically designed for the LtR context where hundreds of continuous or categorical features can be involved. We present a comprehensive experimental analysis conducted on publicly available LtR datasets and we show that the proposed strategies outperform a well-known state-of-the-art competitor.",
"title": ""
},
{
"docid": "5757d96fce3e0b3b3303983b15d0030d",
"text": "Malicious applications pose a threat to the security of the Android platform. The growing amount and diversity of these applications render conventional defenses largely ineffective and thus Android smartphones often remain unprotected from novel malware. In this paper, we propose DREBIN, a lightweight method for detection of Android malware that enables identifying malicious applications directly on the smartphone. As the limited resources impede monitoring applications at run-time, DREBIN performs a broad static analysis, gathering as many features of an application as possible. These features are embedded in a joint vector space, such that typical patterns indicative for malware can be automatically identified and used for explaining the decisions of our method. In an evaluation with 123,453 applications and 5,560 malware samples DREBIN outperforms several related approaches and detects 94% of the malware with few false alarms, where the explanations provided for each detection reveal relevant properties of the detected malware. On five popular smartphones, the method requires 10 seconds for an analysis on average, rendering it suitable for checking downloaded applications directly on the device.",
"title": ""
},
{
"docid": "3038afba11844c31fefc30a8245bc61c",
"text": "Frame duplication is to duplicate a sequence of consecutive frames and insert or replace to conceal or imitate a specific event/content in the same source video. To automatically detect the duplicated frames in a manipulated video, we propose a coarse-to-fine deep convolutional neural network framework to detect and localize the frame duplications. We first run an I3D network [2] to obtain the most candidate duplicated frame sequences and selected frame sequences, and then run a Siamese network with ResNet network [6] to identify each pair of a duplicated frame and the corresponding selected frame. We also propose a heuristic strategy to formulate the video-level score. We then apply our inconsistency detector fine-tuned on the I3D network to distinguish duplicated frames from selected frames. With the experimental evaluation conducted on two video datasets, we strongly demonstrate that our proposed method outperforms the current state-of-the-art methods.",
"title": ""
},
{
"docid": "af5fe4ecd02d320477e2772d63b775dd",
"text": "Background: Blockchain technology is recently receiving a lot of attention from researchers as well as from many different industries. There are promising application areas for the logistics sector like digital document exchange and tracking of goods, but there is no existing research on these topics. This thesis aims to contribute to the research of information systems in logistics in combination with Blockchain technology. Purpose: The purpose of this research is to explore the capabilities of Blockchain technology regarding the concepts of privacy, transparency and trust. In addition, the requirements of information systems in logistics regarding the mentioned concepts are studied and brought in relation to the capabilities of Blockchain technology. The goal is to contribute to a theoretical discussion on the role of Blockchain technology in improving the flow of goods and the flow of information in logistics. Method: The research is carried out in the form of an explorative case study. Blockchain technology has not been studied previously in a logistics setting and therefore, an inductive research approach is chosen by using thematic analysis. The case study is based on a pilot test which had the goal to facilitate a Blockchain to exchange documents and track shipments. Conclusion: The findings reflect that the research on Blockchain technology is still in its infancy and that it still takes several years to facilitate the technology in a productive environment. The Blockchain has the capabilities to meet the requirements of information systems in logistics due to the ability to create trust and establish an organisation overarching platform to exchange information.",
"title": ""
}
] |
scidocsrr
|
6f4ab31fca22f899dedcb84ea87a7ac2
|
Identifying Speakers and Listeners of Quoted Speech in Literary Works
|
[
{
"docid": "67992d0c0b5f32726127855870988b01",
"text": "We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literacy scholars. Instead, our results suggest an alternative explanation for differences in social networks.",
"title": ""
}
] |
[
{
"docid": "e78e70d347fb76a79755442cabe1fbe0",
"text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.",
"title": ""
},
{
"docid": "005ee252d6a89d75f8200ffe1f64f2c0",
"text": "Traditionally, a distinction is made between that what is as serted by uttering a sentence and that what is presupposed. Presuppositions are characte rized as those propositions which persist even if the sentence which triggers them is neg ated. Thus ‘The king of France is bald’ presupposes that there is a king of France , since this follows from both ‘The king of France is bald’ and ‘It is not the case that the kin g of France is bald’. Stalnaker (1974) put forward the idea that a presupposition of an asserted sentence is a piece of information which is assumed by the speaker to be part of the common background of the speaker and interpreter. The presuppositions as anaphors theory of Van der Sandt (1992) — currently the best theory of presuppos ition as far as empirical predictions are concerned (Beaver 1997:983)— can be seen as o e advanced realization of Stalnaker’s basic idea. The main insight of Van der Sa ndt is that there is an interesting correspondence between the behaviour of anaph oric pronouns in discourse and the projection of presuppositions (i.e., whether and ho w presuppositions survive in complex sentences). Like most research in this area, Van d er Sandt’s work concentrates on the interaction between presuppositions and the linguistic context (i.e., the preceding sentences). However, not only linguistic contex t interacts with presuppositions. Consider:",
"title": ""
},
{
"docid": "e4d38d8ef673438e9ab231126acfda99",
"text": "The trend toward physically dispersed work groups has necessitated a fresh inquiry into the role and nature of team leadership in virtual settings. To accomplish this, we assembled thirteen culturally diverse global teams from locations in Europe, Mexico, and the United States, assigning each team a project leader and task to complete. The findings suggest that effective team leaders demonstrate the capability to deal with paradox and contradiction by performing multiple leadership roles simultaneously (behavioral complexity). Specifically, we discovered that highly effective virtual team leaders act in a mentoring role and exhibit a high degree of understanding (empathy) toward other team members. At the same time, effective leaders are also able to assert their authority without being perceived as overbearing or inflexible. Finally, effective leaders are found to be extremely effective at providing regular, detailed, and prompt communication with their peers and in articulating role relationships (responsibilities) among the virtual team members. This study provides useful insights for managers interested in developing global virtual teams, as well as for academics interested in pursuing virtual team research. 8 KAYWORTH AND LEIDNER",
"title": ""
},
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "80244987f22f9fe3f69fddc8af5ded5b",
"text": "In the online voting system, people can vote through the internet. In order to prevent voter frauds, we use two levels of security. In the first level of security, the face of the voter is captured by a web camera and sent to the database. Later, the face of the person is verified with the face present in the database and validated using Matlab. The comparison of the two faces is done using Local Binary Pattern algorithm. The scheme is based on a merging an image and assigns a value of a central pixel. These central pixels are labeled either 0 or 1. If the value is a lower pixel, a histogram of the labels is computed and used as a descriptor. LBP results are combined together to create one vector representing the entire face image. A password (OTP) is used as the second level of security, after entering the one time password generated to their mail it is verified and allow to vote. It should be noted that with this system in place, the users, in this case, shall be given an ample time during the voting period. They shall also be trained on how to vote online before the election time.",
"title": ""
},
{
"docid": "c8911f38bfd68baa54b49b9126c2ad22",
"text": "This document presents a performance comparison of three 2D SLAM techniques available in ROS: Gmapping, Hec-torSLAM and CRSM SLAM. These algorithms were evaluated using a Roomba 645 robotic platform with differential drive and a RGB-D Kinect sensor as an emulator of a scanner lasser. All tests were realized in static indoor environments. To improve the quality of the maps, some rosbag files were generated and used to build the maps in an off-line way.",
"title": ""
},
{
"docid": "370054a58b8f50719106508b138bd095",
"text": "In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas related to further reducing energy consumption in the context of in-network aggregation. The first is by influencing the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that “clusters” along the same path sensor nodes that belong to the same group. The second idea involves imposing a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework to use temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining specified quality of data. These tolerances are based on user preferences or can be dictated by the network in cases where the network cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We evaluate experimentally our proposed schemes in the context of existing in-network aggregation schemes. We present experimental results measuring energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data.",
"title": ""
},
{
"docid": "72420289372499b50e658ef0957a3ad9",
"text": "A ripple current cancellation technique injects AC current into the output voltage bus of a converter that is equal and opposite to the normal converter ripple current. The output current ripple is ideally zero, leading to ultra-low noise converter output voltages. The circuit requires few additional components, no active circuits are required. Only an additional filter inductor winding, an auxiliary inductor, and small capacitor are required. The circuit utilizes leakage inductance of the modified filter inductor as all or part of the required auxiliary inductance. Ripple cancellation is independent of switching frequency, duty cycle, and other converter parameters. The circuit eliminates ripple current in both continuous conduction mode and discontinuous conduction mode. Experimental results provide better than an 80/spl times/ ripple current reduction.",
"title": ""
},
{
"docid": "3189fa20d605bf31c404b0327d74da79",
"text": "We now see an increasing number of self-tracking apps and wearable devices. Despite the vast number of available tools, however, it is still challenging for self-trackers to find apps that suit their unique tracking needs, preferences, and commitments. Furthermore, people are bounded by the tracking tools’ initial design because it is difficult to modify, extend, or mash up existing tools. In this paper, we present OmniTrack, a mobile self-tracking system, which enables self-trackers to construct their own trackers and customize tracking items to meet their individual tracking needs. To inform the OmniTrack design, we first conducted semi-structured interviews (N = 12) and analyzed existing mobile tracking apps (N = 62). We then designed and developed OmniTrack as an Android mobile app, leveraging a semi-automated tracking approach that combines manual and automated tracking methods. We evaluated OmniTrack through a usability study (N = 10) and improved its interfaces based on the feedback. Finally, we conducted a 3-week deployment study (N = 21) to assess if people can capitalize on OmniTrack’s flexible and customizable design to meet their tracking needs. From the study, we showed how participants used OmniTrack to create, revise, and appropriate trackers—ranging from a simple mood tracker to a sophisticated daily activity tracker. We discuss how OmniTrack positively influences and supports self-trackers’ tracking practices over time, and how to further improve OmniTrack by providing more appropriate visualizations and sharable templates, incorporating external contexts, and supporting researchers’ unique data collection needs.",
"title": ""
},
{
"docid": "70358147741dda2d10fdd2d103af9b3a",
"text": "Semi-structured documents (e.g. journal art,icles, electronic mail, television programs, mail order catalogs, . ..) a.re often not explicitly typed; the only available t,ype information is the implicit structure. An explicit t,ype, however, is needed in order to a.pply objectoriented technology, like type-specific methods. In this paper, we present a.n experimental vector space cla.ssifier for determining the type of semi-structured documents. Our goal was to design a. high-performa.nce classifier in t,erms of accuracy (recall and precision), speed, and extensibility.",
"title": ""
},
{
"docid": "b22a05d39ba34d581f0d809e89850520",
"text": "Due to recent financial crises and regulatory concerns, financial intermediaries' credit risk assessment is an area of renewed interest in both the academic world and the business community. In this paper, we propose a new fuzzy support vector machine to discriminate good creditors from bad ones. Because in credit scoring areas we usually cannot label one customer as absolutely good who is sure to repay in time, or absolutely bad who will default certainly, our new fuzzy support vector machine treats every sample as both positive and negative classes, but with different memberships. By this way we expect the new fuzzy support vector machine to have more generalization ability, while preserving the merit of insensitive to outliers, as the fuzzy support vector machine (SVM) proposed in previous papers. We reformulate this kind of two-group classification problem into a quadratic programming problem. Empirical tests on three public datasets show that it can have better discriminatory power than the standard support vector machine and the fuzzy support vector machine if appropriate kernel and membership generation method are chosen.",
"title": ""
},
{
"docid": "eb32ce661a0d074ce90861793a2e4de7",
"text": "A new transfer function from control voltage to duty cycle, the closed-current loop, which captures the natural sampling effect is used to design a controller for the voltage-loop of a pulsewidth modulated (PWM) dc-dc converter operating in continuous-conduction mode (CCM) with peak current-mode control (PCM). This paper derives the voltage loop gain and the closed-loop transfer function from reference voltage to output voltage. The closed-loop transfer function from the input voltage to the output voltage, or the closed-loop audio-susceptibility is derived. The closed-loop transfer function from output current to output voltage, or the closed loop output impedance is also derived. The derivation is performed using an averaged small-signal model of the example boost converter for CCM. Experimental verification is presented. The theoretical and experimental results were in good agreement, confirming the validity of the transfer functions derived.",
"title": ""
},
{
"docid": "f670178ac943bbcc17978a0091159c7f",
"text": "In this article, we present the first academic comparable corpus involving written French and French Sign Language. After explaining our initial motivation to build a parallel set of such data, especially in the context of our work on Sign Language modelling and our prospect of machine translation into Sign Language, we present the main problems posed when mixing language channels and modalities (oral, written, signed), discussing the translation-vs-interpretation narrative in particular. We describe the process followed to guarantee feature coverage and exploitable results despite a serious cost limitation, the data being collected from professional translations. We conclude with a few uses and prospects of the corpus.",
"title": ""
},
{
"docid": "fe753c4be665700ac15509c4b831309c",
"text": "Elements of Successful Digital Transformation12 New digital technologies, particularly what we refer to as SMACIT3 (social, mobile, analytics, cloud and Internet of things [IoT]) technologies, present both game-changing opportunities and existential threats to big old companies. GE’s “industrial internet” and Philips’ digital platform for personalized healthcare information represent bets made by big old companies attempting to cash",
"title": ""
},
{
"docid": "486b140009524e48da94712191dba78e",
"text": "The concept of holistic processing is a cornerstone of face-recognition research. In the study reported here, we demonstrated that holistic processing predicts face-recognition abilities on the Cambridge Face Memory Test and on a perceptual face-identification task. Our findings validate a large body of work that relies on the assumption that holistic processing is related to face recognition. These findings also reconcile the study of face recognition with the perceptual-expertise work it inspired; such work links holistic processing of objects with people's ability to individuate them. Our results differ from those of a recent study showing no link between holistic processing and face recognition. This discrepancy can be attributed to the use in prior research of a popular but flawed measure of holistic processing. Our findings salvage the central role of holistic processing in face recognition and cast doubt on a subset of the face-perception literature that relies on a problematic measure of holistic processing.",
"title": ""
},
{
"docid": "809384abcd6e402c1b30c3d2dfa75aa1",
"text": "Traditionally, psychiatry has offered clinical insights through keen behavioral observation and a deep study of emotion. With the subsequent biological revolution in psychiatry displacing psychoanalysis, some psychiatrists were concerned that the field shifted from “brainless” to “mindless.”1 Over the past 4 decades, behavioral expertise, once the strength of psychiatry, has diminished in importanceaspsychiatricresearchfocusedonpharmacology,genomics, and neuroscience, and much of psychiatric practicehasbecomeaseriesofbriefclinical interactionsfocused on medication management. In research settings, assigning a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders has become a surrogate for behavioral observation. In practice, few clinicians measure emotion, cognition, or behavior with any standard, validated tools. Some recent changes in both research and practice are promising. The National Institute of Mental Health has led an effort to create a new diagnostic approach for researchers that is intended to combine biological, behavioral, and social factors to create “precision medicine for psychiatry.”2 Although this Research Domain Criteria project has been controversial, the ensuing debate has been",
"title": ""
},
{
"docid": "404f1c68c097c74b120189af67bf00f5",
"text": "In 1991, a novel robot, MIT-MANUS, was introduced to study the potential that robots might assist in and quantify the neuro-rehabilitation of motor function. MIT-MANUS proved an excellent tool for shoulder and elbow rehabilitation in stroke patients, showing in clinical trials a reduction of impairment in movements confined to the exercised joints. This successful proof of principle as to additional targeted and intensive movement treatment prompted a test of robot training examining other limb segments. This paper focuses on a robot for wrist rehabilitation designed to provide three rotational degrees-of-freedom. The first clinical trial of the device will enroll 200 stroke survivors. Ultimately 160 stroke survivors will train with both the proximal shoulder and elbow MIT-MANUS robot, as well as with the novel distal wrist robot, in addition to 40 stroke survivor controls. So far 52 stroke patients have completed the robot training (ongoing protocol). Here, we report on the initial results on 36 of these volunteers. These results demonstrate that further improvement should be expected by adding additional training to other limb segments.",
"title": ""
},
{
"docid": "b408788cd974438f32c1858cda9ff910",
"text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.",
"title": ""
},
{
"docid": "90b2d777eeac2466293c60ba699ea76b",
"text": "As autonomous vehicles become an every-day reality, high-accuracy pedestrian detection is of paramount practical importance. Pedestrian detection is a highly researched topic with mature methods, but most datasets (for both training and evaluation) focus on common scenes of people engaged in typical walking poses on sidewalks. But performance is most crucial for dangerous scenarios that are rarely observed, such as children playing in the street and people using bicycles/skateboards in unexpected ways. Such in-the-tail data is notoriously hard to observe, making both training and testing difficult. To analyze this problem, we have collected a novel annotated dataset of dangerous scenarios called the Precarious Pedestrian dataset. Even given a dedicated collection effort, it is relatively small by contemporary standards (≈ 1000 images). To explore large-scale data-driven learning, we explore the use of synthetic data generated by a game engine. A significant challenge is selected the right priors or parameters for synthesis: we would like realistic data with realistic poses and object configurations. Inspired by Generative Adversarial Networks, we generate a massive amount of synthetic data and train a discriminative classifier to select a realistic subset (that fools the classifier), which we deem Synthetic Imposters. We demonstrate that this pipeline allows one to generate realistic (or adverserial) training data by making use of rendering/animation engines. Interestingly, we also demonstrate that such data can be used to rank algorithms, suggesting that Synthetic Imposters can also be used for in-the-tail validation at test-time, a notoriously difficult challenge for real-world deployment.",
"title": ""
},
{
"docid": "2c4fed71ee9d658516b017a924ad6589",
"text": "As the concept of Friction stir welding is relatively new, there are many areas, which need thorough investigation to optimize and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translation speeds, tool tilt angle, tool geometry etc. are to be controlled. Aluminum alloys of 5xxx series and their welded joints show good resistance to corrosion in sea water. Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.",
"title": ""
}
] |
scidocsrr
|
c7f5a23df45b056a8a593e880402f3ed
|
Advances in dental veneers: materials, applications, and techniques
|
[
{
"docid": "b42c230ff1af8da8b8b4246bc9cb2bd8",
"text": "Patients have many restorative options for changing the appearance of their teeth. The most conservative restorative treatments for changing the appearance of teeth include tooth bleaching, direct composite resin veneers, and porcelain veneers. Patients seeking esthetic treatment should undergo a comprehensive clinical examination that includes an esthetic evaluation. When selecting a conservative treatment modality, the use of minimally invasive or no-preparation porcelain veneers should be considered. As with any treatment decision, the indications and contraindications must be considered before a definitive treatment plan is made. Long-term research has demonstrated a 94% survival rate for minimally invasive porcelain veneers. While conservation of tooth structure is important, so is selecting the right treatment modality for each patient based on clinical findings.",
"title": ""
}
] |
[
{
"docid": "d78117c809f963a2983c262cca2399e9",
"text": "Range detection applications based on radar can be separated into measurements of short distances with high accuracy or large distances with low accuracy. In this paper an approach is investigated to combine the advantages of both principles. Therefore an FMCW radar will be extended with an additional phase evaluation technique. In order to realize this combination an increased range resolution of the FMCW radar is required. This paper describes an frequency estimation algorithm to increase the frequency resolution and hence the range resolution of an FMCW radar at 24 GHz for a line based range detection system to evaluate the possibility of an extended FMCW radar using the phase information.",
"title": ""
},
{
"docid": "d1ebf47c1f0b1d8572d526e9260dbd32",
"text": "In this paper, mortality in the immediate aftermath of an earthquake is studied on a worldwide scale using multivariate analysis. A statistical method is presented that analyzes reported earthquake fatalities as a function of a heterogeneous set of parameters selected on the basis of their presumed influence on earthquake mortality. The ensemble was compiled from demographic, seismic, and reported fatality data culled from available records of past earthquakes organized in a geographic information system. The authors consider the statistical relation between earthquake mortality and the available data ensemble, analyze the validity of the results in view of the parametric uncertainties, and propose a multivariate mortality analysis prediction method. The analysis reveals that, although the highest mortality rates are expected in poorly developed rural areas, high fatality counts can result from a wide range of mortality ratios that depend on the effective population size.",
"title": ""
},
{
"docid": "89039f8d247b3f178c0be6a1f30004b8",
"text": "We study the property of the Fused Lasso Signal Approximator (FLSA) for estimating a blocky signal sequence with additive noise. We transform the FLSA to an ordinary Lasso problem, and find that in general the resulting design matrix does not satisfy the irrepresentable condition that is known as an almost necessary and sufficient condition for exact pattern recovery. We give necessary and sufficient conditions on the expected signal pattern such that the irrepresentable condition holds in the transformed Lasso problem. However, these conditions turn out to be very restrictive. We apply the newly developed preconditioning method — Puffer Transformation (Jia and Rohe, 2015) to the transformed Lasso and call the new procedure the preconditioned fused Lasso. We give nonasymptotic results for this method, showing that as long as the signal-to-noise ratio is not too small, our preconditioned fused Lasso estimator always recovers the correct pattern with high probability. Theoretical results give insight into what controls the ability of recovering the pattern — it is the noise level instead of the length of the signal sequence. Simulations further confirm our theorems and visualize the significant improvement of the preconditioned fused Lasso estimator over the vanilla FLSA in exact pattern recovery. © 2015 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "e756574e701c9ecc4e28da6135499215",
"text": "MicroRNAs are small noncoding RNA molecules that regulate gene expression posttranscriptionally through complementary base pairing with thousands of messenger RNAs. They regulate diverse physiological, developmental, and pathophysiological processes. Recent studies have uncovered the contribution of microRNAs to the pathogenesis of many human diseases, including liver diseases. Moreover, microRNAs have been identified as biomarkers that can often be detected in the systemic circulation. We review the role of microRNAs in liver physiology and pathophysiology, focusing on viral hepatitis, liver fibrosis, and cancer. We also discuss microRNAs as diagnostic and prognostic markers and microRNA-based therapeutic approaches for liver disease.",
"title": ""
},
{
"docid": "7b7289900ac45f4ee5357084f16a4c0d",
"text": "We present a simple and accurate span-based model for semantic role labeling (SRL). Our model directly takes into account all possible argument spans and scores them for each label. At decoding time, we greedily select higher scoring labeled spans. One advantage of our model is to allow us to design and use spanlevel features, that are difficult to use in tokenbased BIO tagging approaches. Experimental results demonstrate that our ensemble model achieves the state-of-the-art results, 87.4 F1 and 87.0 F1 on the CoNLL-2005 and 2012 datasets, respectively.",
"title": ""
},
{
"docid": "d9f7d78b6e1802a17225db13edd033f6",
"text": "The edit distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations we admit are deleting, inserting and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit distances. We describe an algorithm for computing the edit distance between two strings of length n and m, n > m, which requires O(n * max( 1, m/log n)) steps whenever the costs of edit operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.",
"title": ""
},
{
"docid": "712098110f7713022e4664807ac106c7",
"text": "Getting a machine to understand human narratives has been a classic challenge for NLP and AI. This paper proposes a new representation for the temporal structure of narratives. The representation is parsimonious, using temporal relations as surrogates for discourse relations. The narrative models, called Temporal Discourse Models, are treestructured, where nodes include abstract events interpreted as pairs of time points and where the dominance relation is expressed by temporal inclusion. Annotation examples and challenges are discussed, along with a report on progress to date in creating annotated corpora.",
"title": ""
},
{
"docid": "67c444b9538ccfe7a2decdd11523dcd5",
"text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.",
"title": ""
},
{
"docid": "a7c9d58c49f1802b94395c6f12c2d6dd",
"text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8d5d2f266181d456d4f71df26075a650",
"text": "Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better tactic coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures. The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends",
"title": ""
},
{
"docid": "d3a79da70eed0ec0352cb924c8ce0744",
"text": "2. School of Electronics Engineering and Computer science. Peking University, Beijing 100871,China Abstract—Speech emotion recognition (SER) is to study the formation and change of speaker’s emotional state from the speech signal perspective, so as to make the interaction between human and computer more intelligent. SER is a challenging task that has encountered the problem of less training data and low prediction accuracy. Here we propose a data augmentation algorithm based on the imaging principle of the retina and convex lens, to acquire the different sizes of spectrogram and increase the amount of training data by changing the distance between the spectrogram and the convex lens. Meanwhile, with the help of deep learning to get the high-level features, we propose the Deep Retinal Convolution Neural Networks (DRCNNs) for SER and achieve the average accuracy over 99%. The experimental results indicate that DRCNNs outperforms the previous studies in terms of both the number of emotions and the accuracy of recognition. Predictably, our results will dramatically improve human-computer interaction.",
"title": ""
},
{
"docid": "5c5c21bd0c50df31c6ccec63d864568c",
"text": "Intellectual Property issues (IP) is a concern that refrains companies to cooperate in whatever of Open Innovation (OI) processes. Particularly, SME consider open innovation as uncertain, risky processes. Despite the opportunities that online OI platforms offer, SMEs have so far failed to embrace them, and proved reluctant to OI. We intend to find whether special collaborative spaces that facilitate a sort of preventive idea claiming, explicit claiming evolution of defensive publication, as so far patents and publications for prevailing innovation, can be the right complementary instruments in OI as to when stronger IP protection regimes might drive openness by SME in general. These spaces, which we name NIR (Networking Innovation Rooms), are a practical, smart paradigm to boost OI for SME. There users sign smart contracts as NDA which takes charge of timestamping any IP disclosure or creation and declares what corrective actions (if they might apply) might be taken for unauthorised IP usage or disclosure of any of the NDA signers. With Blockchain, a new technology emerges which enables decentralised, fine-grained IP management for OI.",
"title": ""
},
{
"docid": "1514ce079eba01f4a78ab13c49cc2fa7",
"text": "The task of event trigger labeling is typically addressed in the standard supervised setting: triggers for each target event type are annotated as training data, based on annotation guidelines. We propose an alternative approach, which takes the example trigger terms mentioned in the guidelines as seeds, and then applies an eventindependent similarity-based classifier for trigger labeling. This way we can skip manual annotation for new event types, while requiring only minimal annotated training data for few example events at system setup. Our method is evaluated on the ACE-2005 dataset, achieving 5.7% F1 improvement over a state-of-the-art supervised system which uses the full training data.",
"title": ""
},
{
"docid": "e0d8936ecce870fbcee6b3bd4bc66d10",
"text": "UNLABELLED\nMathematical modeling is a process by which a real world problem is described by a mathematical formulation. The cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer.\n\n\nMATERIAL AND METHODS\nThe vast majority of mathematical models in cancer diseases biology are formulated in terms of differential equations. We propose an original mathematical model with small parameter for the interactions between these two cancer cell sub-populations and the mathematical model of a vascular tumor. We work on the assumption that, the quiescent cells' nutrient consumption is long. One the equations system includes small parameter epsilon. The smallness of epsilon is relative to the size of the solution domain.\n\n\nRESULTS\nMATLAB simulations obtained for transition rate from the quiescent cells' nutrient consumption is long, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, different from the independent variable. The graphical output for a mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region.\n\n\nCONCLUSIONS\nMany mathematical models can be quantitatively characterized by ordinary differential equations or partial differential equations. The use of MATLAB in this article illustrates the important role of informatics in research in mathematical modeling. The study of avascular tumor growth cells is an exciting and important topic in cancer research and will profit considerably from theoretical input. Interpret these results to be a permanent collaboration between math's and medical oncologists.",
"title": ""
},
{
"docid": "5ae4b1d4ef00afbde49edfaa2728934b",
"text": "A wideband, low loss inline transition from microstrip line to rectangular waveguide is presented. This transition efficiently couples energy from a microstrip line to a ridge and subsequently to a TE10 waveguide. This unique structure requires no mechanical pressure for electrical contact between the microstrip probe and the ridge because the main planar circuitry and ridge sections are placed on a single housing. The measured insertion loss for back-to-back transition is 0.5 – 0.7 dB (0.25 – 0.35 dB/transition) in the band 50 – 72 GHz.",
"title": ""
},
{
"docid": "9d175a211ec3b0ee7db667d39c240e1c",
"text": "In recent years, there has been an increased effort to introduce coding and computational thinking in early childhood education. In accordance with the international trend, programming has become an increasingly growing focus in European education. With over 9.5 million iOS downloads, ScratchJr is the most popular freely available introductory programming language for young children (ages 5-7). This paper provides an overview of ScratchJr, and the powerful ideas from computer science it is designed to teach. In addition, data analytics are presented to show trends of usage in Europe and and how it compares to the rest of the world. Data reveals that countries with robust computer science initiatives such as the UK and the Nordic countries have high usage of ScratchJr.",
"title": ""
},
{
"docid": "0243c98e13e814320ec2a3416d2bcc94",
"text": "Projects that are over-budget, delivered late, and fall short of user's expectations have been a common problem are a for software development efforts for years. Agile methods, which represent an emerging set of software development methodologies based on the concepts of adaptability and flexibility, are currently touted as a way to alleviate these reoccurring problems and pave the way for the future of development. The estimation in Agile Software Development methods depends on an expert opinion and historical data of project for estimation of cost, size, effort and duration. In absence of the historical data and experts the previous method like analogy and planning poker are not useful. This paper focuses on the research work in Agile Software development and estimation in Agile. It also focuses the problems in current Agile practices thereby proposed a method for accurate cost and effort estimation.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "ddb51863430250a28f37c5f12c13c910",
"text": "Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, “contextuality,” is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, “quantum entanglement,” allows cognitive phenomena to be modelled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light. Introducing the basic principles in an easy-to-follow way, this book does not assume a physics background or a quantum brain and comes complete with a tutorial and fully worked-out applications in important areas of cognition and decision.",
"title": ""
}
] |
scidocsrr
|
443b689900fc69a1a256fb30af2036e5
|
SYSTEMS CONTINUANCE: AN EXPECTATION-CONFIRMATION MODEL
|
[
{
"docid": "6c2afcf5d7db0f5d6baa9d435c203f8a",
"text": "An attempt to extend current thinking on postpurchase response to include attribute satisfaction and dissatisfaction as separate determinants not fully reflected in either cognitive (i.e.. expectancy disconfirmation) or affective paradigms is presented. In separate studies of automobile satisfaction and satisfaction with course instruction, respondents provided the nature of emotional experience, disconfirmation perceptions, and separate attribute satisfaction and dissatisfaction judgments. Analysis confirmed the disconfirmation effect and tbe effects of separate dimensions of positive and negative affect and also suggested a multidimensional structure to the affect dimensions. Additionally, attribute satisfaction and dissatisfaction were significantly related to positive and negative affect, respectively, and to overall satisfaction. It is suggested that all dimensions tested are needed for a full accounting of postpurchase responses in usage.",
"title": ""
}
] |
[
{
"docid": "b3e90fdfda5346544f769b6dd7c3882b",
"text": "Bromelain is a complex mixture of proteinases typically derived from pineapple stem. Similar proteinases are also present in pineapple fruit. Beneficial therapeutic effects of bromelain have been suggested or proven in several human inflammatory diseases and animal models of inflammation, including arthritis and inflammatory bowel disease. However, it is not clear how each of the proteinases within bromelain contributes to its anti-inflammatory effects in vivo. Previous in vivo studies using bromelain have been limited by the lack of assays to control for potential differences in the composition and proteolytic activity of this naturally derived proteinase mixture. In this study, we present model substrate assays and assays for cleavage of bromelain-sensitive cell surface molecules can be used to assess the activity of constituent proteinases within bromelain without the need for biochemical separation of individual components. Commercially available chemical and nutraceutical preparations of bromelain contain predominately stem bromelain. In contrast, the proteinase activity of pineapple fruit reflects its composition of fruit bromelain>ananain approximately stem bromelain. Concentrated bromelain solutions (>50 mg/ml) are more resistant to spontaneous inactivation of their proteolytic activity than are dilute solutions, with the proteinase stability in the order of stem bromelain>fruit bromelain approximately ananain. The proteolytic activity of concentrated bromelain solutions remains relatively stable for at least 1 week at room temperature, with minimal inactivation by multiple freeze-thaw cycles or exposure to the digestive enzyme trypsin. The relative stability of concentrated versus dilute bromelain solutions to inactivation under physiologically relevant conditions suggests that delivery of bromelain as a concentrated bolus would be the preferred method to maximize its proteolytic activity in vivo.",
"title": ""
},
{
"docid": "135d451e66cdc8d47add47379c1c35f9",
"text": "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind denoising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.",
"title": ""
},
{
"docid": "9a9fd442bc7353d9cd202e9ace6e6580",
"text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.",
"title": ""
},
{
"docid": "22285844f638715765d21bff139d1bb1",
"text": "The field of Terahertz (THz) radiation, electromagnetic energy, between 0.3 to 3 THz, has seen intense interest recently, because it combines some of the best properties of IR along with those of RF. For example, THz radiation can penetrate fabrics with less attenuation than IR, while its short wavelength maintains comparable imaging capabilities. We discuss major challenges in the field: designing systems and applications which fully exploit the unique properties of THz radiation. To illustrate, we present our reflective, radar-inspired THz imaging system and results, centered on biomedical burn imaging and skin hydration, and discuss challenges and ongoing research.",
"title": ""
},
{
"docid": "85d9b0ed2e9838811bf3b07bb31dbeb6",
"text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.",
"title": ""
},
{
"docid": "0d2260653f223db82e2e713f211a2ba0",
"text": "Smartphone usage is a hot topic in pervasive computing due to their popularity and personal aspect. We present our initial results from analyzing how individual differences, such as gender and age, affect smartphone usage. The dataset comes from a large scale longitudinal study, the Menthal project. We select a sample of 30, 677 participants, from which 16, 147 are males and 14, 523 are females, with a median age of 21 years. These have been tracked for at least 28 days and they have submitted their demographic data through a questionnaire. The ongoing experiment has been started in January 2014 and we have used our own mobile data collection and analysis framework. Females use smartphones for longer periods than males, with a daily mean of 166.78 minutes vs. 154.26 minutes. Younger participants use their phones longer and usage is directed towards entertainment and social interactions through specialized apps. Older participants use it less and mainly for getting information or using it as a classic phone.",
"title": ""
},
{
"docid": "893942f986718d639aa46930124af679",
"text": "In this work we consider the problem of controlling a team of microaerial vehicles moving quickly through a three-dimensional environment while maintaining a tight formation. The formation is specified by a shape matrix that prescribes the relative separations and bearings between the robots. Each robot plans its trajectory independently based on its local information of other robot plans and estimates of states of other robots in the team to maintain the desired shape. We explore the interaction between nonlinear decentralized controllers, the fourth-order dynamics of the individual robots, the time delays in the network, and the effects of communication failures on system performance. An experimental evaluation of our approach on a team of quadrotors suggests that suitable performance is maintained as the formation motions become increasingly aggressive and as communication degrades.",
"title": ""
},
{
"docid": "62f5640954e5b731f82599fb52ea816f",
"text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.",
"title": ""
},
{
"docid": "1c6a9910a51656a47a8599a98dba77bb",
"text": "In real life facial expressions show mixture of emotions. This paper proposes a novel expression descriptor based expression map that can efficiently represent pure, mixture and transition of facial expressions. The expression descriptor is the integration of optic flow and image gradient values and the descriptor value is accumulated in temporal scale. The expression map is realized using self-organizing map. We develop an objective scheme to find the percentage of different prototypical pure emotions (e.g., happiness, surprise, disgust etc.) that mix up to generate a real facial expression. Experimental results show that the expression map can be used as an effective classifier for facial expressions.",
"title": ""
},
{
"docid": "210052dbabdb5c48502079d75cdd6ce6",
"text": "Sketch It, Make It (SIMI) is a modeling tool that enables non-experts to design items for fabrication with laser cutters. SIMI recognizes rough, freehand input as a user iteratively edits a structured vector drawing. The tool combines the strengths of sketch-based interaction with the power of constraint-based modeling. Several interaction techniques are combined to present a coherent system that makes it easier to make precise designs for laser cutters.",
"title": ""
},
{
"docid": "426d3b0b74eacf4da771292abad06739",
"text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "753f837e53a08a59392c30515481b503",
"text": "Light is a powerful zeitgeber that synchronizes our endogenous circadian pacemaker with the environment and has been previously described as an agent in improving cognitive performance. With that in mind, this study was designed to explore the influence of exposure to blue-enriched white light in the morning on the performance of adolescent students. 58 High school students were recruited from four classes in two schools. In each school, one classroom was equipped with blue-enriched white lighting while the classroom next door served as a control setting. The effects of classroom lighting on cognitive performance were assessed using standardized psychological tests. Results show beneficial effects of blue-enriched white light on students' performance. In comparison to standard lighting conditions, students showed faster cognitive processing speed and better concentration. The blue-enriched white lighting seems to influence very basic information processing primarily, as no effects on short-term encoding and retrieval of memories were found. & 2014 Elsevier GmbH. All rights reserved.",
"title": ""
},
{
"docid": "47b7ebc460ce1273941bdef5bc754d4a",
"text": "When people predict their future behavior, they tend to place too much weight on their current intentions, which produces an optimistic bias for behaviors associated with currently strong intentions. More realistic self-predictions require greater sensitivity to situational barriers, such as obstacles or competing demands, that may interfere with the translation of current intentions into future behavior. We consider three reasons why people may not adjust sufficiently for such barriers. First, self-predictions may focus exclusively on current intentions, ignoring potential barriers altogether. We test this possibility, in three studies, with manipulations that draw greater attention to barriers. Second, barriers may be discounted in the self-prediction process. We test this possibility by comparing prospective and retrospective ratings of the impact of barriers on the target behavior. Neither possibility was supported in these tests, or in a further test examining whether an optimally weighted statistical model could improve on the accuracy of self-predictions by placing greater weight on anticipated situational barriers. Instead, the evidence supports a third possibility: Even when they acknowledge that situational factors can affect the likelihood of carrying out an intended behavior, people do not adequately moderate the weight placed on their current intentions when predicting their future behavior.",
"title": ""
},
{
"docid": "9a397ca2a072d9b1f861f8a6770aa792",
"text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.",
"title": ""
},
{
"docid": "9c1687323661ccb6bf2151824edc4260",
"text": "In this work we present the design of a digitally controlled ring type oscillator in 0.5 μm CMOS technology for a low-cost and portable radio-frequency diathermy (RFD) device. The oscillator circuit is composed by a low frequency ring oscillator (LFRO), a voltage controlled ring oscillator (VCRO), and a logic control. The digital circuit generates an input signal for the LFO, which generates a voltage ramp that controls the oscillating output signal of the VCRO in the range of 500 KHz to 1 MHz. Simulation results show that the proposed circuit exhibits controllable output characteristics in the range of 500 KHz–1 MHz, with low power consumption and low phase noise, making it suitable for a portable RFD device.",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
{
"docid": "04435e017e720c0ed6e5c0cd29f1b4fc",
"text": "Blobworld is a system for image retrieval based on finding coherent image regions which roughly correspond to objects. Each image is automatically segmented into regions (“blobs”) with associated color and texture descriptors. Querying is based on the attributes of one or two regions of interest, rather than a description of the entire image. In order to make large-scale retrieval feasible, we index the blob descriptions using a tree. Because indexing in the high-dimensional feature space is computationally prohibitive, we use a lower-rank approximation to the high-dimensional distance. Experiments show encouraging results for both querying and indexing.",
"title": ""
}
] |
scidocsrr
|
92a6ff6616ba7c6622b8b1510ef7f142
|
Interactive whiteboards: Interactive or just whiteboards?
|
[
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
}
] |
[
{
"docid": "8a3dba8aa5aa8cf69da21079f7e36de6",
"text": "This letter presents a novel technique for synthesis of coupled-resonator filters with inter-resonator couplings varying linearly with frequency. The values of non-zero elements of the coupling matrix are found by solving a nonlinear least squares problem involving eigenvalues of matrix pencils derived from the coupling matrix and reference zeros and poles of scattering parameters. The proposed method was verified by numerical tests carried out for various coupling schemes including triplets and quadruplets for which the frequency-dependent coupling was found to produce an extra zero.",
"title": ""
},
{
"docid": "e55ad28c68a422ec959e8b247aade1b9",
"text": "Developing reliable methods for representing and managing information uncertainty remains a persistent and relevant challenge to GIScience. Information uncertainty is an intricate idea, and recent examinations of this concept have generated many perspectives on its representation and visualization, with perspectives emerging from a wide range of disciplines and application contexts. In this paper, we review and assess progress toward visual tools and methods to help analysts manage and understand information uncertainty. Specifically, we report on efforts to conceptualize uncertainty, decision making with uncertainty, frameworks for representing uncertainty, visual representation and user control of displays of information uncertainty, and evaluative efforts to assess the use and usability of visual methods of uncertainty. We conclude by identifying seven key research challenges in visualizing information uncertainty, particularly as it applies to decision making and analysis.",
"title": ""
},
{
"docid": "c8911f38bfd68baa54b49b9126c2ad22",
"text": "This document presents a performance comparison of three 2D SLAM techniques available in ROS: Gmapping, Hec-torSLAM and CRSM SLAM. These algorithms were evaluated using a Roomba 645 robotic platform with differential drive and a RGB-D Kinect sensor as an emulator of a scanner lasser. All tests were realized in static indoor environments. To improve the quality of the maps, some rosbag files were generated and used to build the maps in an off-line way.",
"title": ""
},
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "b305e3504e3a99a5cd026e7845d98dab",
"text": "This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST and the backwards-smoothing extended Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A twostep approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, Associate Professor, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Associate Fellow AIAA. Aerospace Engineer, Guidance, Navigation and Control Systems Engineering Branch. Email: [email protected]. Fellow AIAA. Postdoctoral Research Fellow, Department of Mechanical & Aerospace Engineering. Email: [email protected]. Member AIAA.",
"title": ""
},
{
"docid": "fc79bfdb7fbbfa42d2e1614964113101",
"text": "Probability Theory, 2nd ed. Princeton, N. J.: 960. Van Nostrand, 1 121 T. T. Kadota, “Optimum reception of binary gaussian signals,” Bell Sys. Tech. J., vol. 43, pp. 2767-2810, November 1964. 131 T. T. Kadota. “Ootrmum recention of binarv sure and Gaussian signals,” Bell Sys. ?‘ech: J., vol. 44;~~. 1621-1658, October 1965. 141 U. Grenander, ‘Stochastic processes and statistical inference,” Arkiv fiir Matematik, vol. 17, pp. 195-277, 1950. 151 L. A. Zadeh and J. R. Ragazzini, “Optimum filters for the detection of signals in noise,” Proc. IRE, vol. 40, pp. 1223-1231, O,+nhm 1 a.63 161 J. H. Laning and R. H. Battin, Random Processes in Automatic Control. New York: McGraw-Hill. 1956. nn. 269-358. 171 C.. W. Helstrom, “ Solution of the dete&on integral equation for stationary filtered white noise,” IEEE Trans. on Information Theory, vol. IT-II, pp. 335-339, July 1965. 181 T. Kailath, “The detection of known signals in colored Gaussian noise,” Stanford Electronics Labs., Stanford Univ., Stanford, Calif. Tech. Rept. 7050-4, July 1965. 191 T. T. Kadota, “Optimum reception of nf-ary Gaussian signals in Gaussian noise,” Bell. Sys. Tech. J., vol. 44, pp. 2187-2197, November 1965. [lOI T. T. Kadota, “Term-by-term differentiability of Mercer’s expansion,” Proc. of Am. Math. Sot., vol. 18, pp. 69-72, February 1967.",
"title": ""
},
{
"docid": "ecea888d3b2d6b9ce0a26a4af6382db8",
"text": "Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provides an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications, and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.",
"title": ""
},
{
"docid": "03ba329de93f763ff6f0a8c4c6e18056",
"text": "Nowadays, with the availability of massive amount of trade data collected, the dynamics of the financial markets pose both a challenge and an opportunity for high frequency traders. In order to take advantage of the rapid, subtle movement of assets in High Frequency Trading (HFT), an automatic algorithm to analyze and detect patterns of price change based on transaction records must be available. The multichannel, time-series representation of financial data naturally suggests tensor-based learning algorithms. In this work, we investigate the effectiveness of two multilinear methods for the mid-price prediction problem against other existing methods. The experiments in a large scale dataset which contains more than 4 millions limit orders show that by utilizing tensor representation, multilinear models outperform vector-based approaches and other competing ones.",
"title": ""
},
{
"docid": "4f631769d8267c81ea568c9eed71ac09",
"text": "To study a phenomenon scientifically, it must be appropriately described and measured. How mindfulness is conceptualized and assessed has considerable importance for mindfulness science, and perhaps in part because of this, these two issues have been among the most contentious in the field. In recognition of the growing scientific and clinical interest in",
"title": ""
},
{
"docid": "f1f72a6d5d2ab8862b514983ac63480b",
"text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi2 route module.",
"title": ""
},
{
"docid": "68865e653e94d3366961434cc012363f",
"text": "Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC) a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so called \"contrastive analysis\"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.",
"title": ""
},
{
"docid": "c224cc83b4c58001dbbd3e0ea44a768a",
"text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear -Catenin signal. At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal -Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS). In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.",
"title": ""
},
{
"docid": "aad34b3e8acc311d0d32964c6607a6e1",
"text": "This paper looks at the performance of photovoltaic modules in nonideal conditions and proposes topologies to minimize the degradation of performance caused by these conditions. It is found that the peak power point of a module is significantly decreased due to only the slightest shading of the module, and that this effect is propagated through other nonshaded modules connected in series with the shaded one. Based on this result, two topologies for parallel module connections have been outlined. In addition, dc/dc converter technologies, which are necessary to the design, are compared by way of their dynamic models, frequency characteristics, and component cost. Out of this comparison, a recommendation has been made",
"title": ""
},
{
"docid": "1ad06e5eee4d4f29dd2f0e8f0dd62370",
"text": "Recent research on map matching algorithms for land vehicle navigation has been based on either a conventional topological analysis or a probabilistic approach. The input to these algorithms normally comes from the global positioning system and digital map data. Although the performance of some of these algorithms is good in relatively sparse road networks, they are not always reliable for complex roundabouts, merging or diverging sections of motorways and complex urban road networks. In high road density areas where the average distance between roads is less than 100 metres, there may be many road patterns matching the trajectory of the vehicle reported by the positioning system at any given moment. Consequently, it may be difficult to precisely identify the road on which the vehicle is travelling. Therefore, techniques for dealing with qualitative terms such as likeliness are essential for map matching algorithms to identify a correct link. Fuzzy logic is one technique that is an effective way to deal with qualitative terms, linguistic vagueness, and human intervention. This paper develops a map matching algorithm based on fuzzy logic theory. The inputs to the proposed algorithm are from the global positioning system augmented with data from deduced reckoning sensors to provide continuous navigation. The algorithm is tested on different road networks of varying complexity. The validation of this algorithm is carried out using high precision positioning data obtained from GPS carrier phase observables. The performance of the developed map matching algorithm is evaluated against the performance of several well-accepted existing map matching algorithms. The results show that the fuzzy logic-based map matching algorithm provides a significant improvement over existing map matching algorithms both in terms of identifying correct links and estimating the vehicle position on the links.",
"title": ""
},
{
"docid": "39eaf3ad7373d36404e903a822a3d416",
"text": "We present HaptoMime, a mid-air interaction system that allows users to touch a floating virtual screen with hands-free tactile feedback. Floating images formed by tailored light beams are inherently lacking in tactile feedback. Here we propose a method to superpose hands-free tactile feedback on such a floating image using ultrasound. By tracking a fingertip with an electronically steerable ultrasonic beam, the fingertip encounters a mechanical force consistent with the floating image. We demonstrate and characterize the proposed transmission scheme and discuss promising applications with an emphasis that it helps us 'pantomime' in mid-air.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "c1b5b1dcbb3e7ff17ea6ad125bbc4b4b",
"text": "This article focuses on a new type of wireless devices in the domain between RFIDs and sensor networks—Energy-Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, furniture, toys, produce, and clothing). Therefore, they will provide the infrastructure for various tracking applications and can serve as one of the enablers for the Internet of Things. We present the design considerations for the EnHANT prototypes, developed over the past 4 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. We describe a small-scale testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs. Then, we experimentally evaluate the performance of different energy-harvesting adaptive policies with organic solar cells and UWB-IR transceivers. Finally, we discuss the lessons learned during the prototype and testbed design process.",
"title": ""
},
{
"docid": "c44ef4f4242147affdbe613c70ec4a85",
"text": "The physical and generalized sensor models are two widely used imaging geometry models in the photogrammetry and remote sensing. Utilizing the rational function model (RFM) to replace physical sensor models in photogrammetric mapping is becoming a standard way for economical and fast mapping from high-resolution images. The RFM is accepted for imagery exploitation since high accuracies have been achieved in all stages of the photogrammetric process just as performed by rigorous sensor models. Thus it is likely to become a passkey in complex sensor modeling. Nowadays, commercial off-the-shelf (COTS) digital photogrammetric workstations have incorporated the RFM and related techniques. Following the increasing number of RFM related publications in recent years, this paper reviews the methods and key applications reported mainly over the past five years, and summarizes the essential progresses and address the future research directions in this field. These methods include the RFM solution, the terrainindependent and terrain-dependent computational scenarios, the direct and indirect RFM refinement methods, the photogrammetric exploitation techniques, and photogrammetric interoperability for cross sensor/platform imagery integration. Finally, several open questions regarding some aspects worth of further study are addressed.",
"title": ""
},
{
"docid": "d5b004af32bd747c2b5ad175975f8c06",
"text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.",
"title": ""
},
{
"docid": "c117a5fc0118f3ea6c576bb334759d59",
"text": "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.",
"title": ""
}
] |
scidocsrr
|
db2f203d3ab77a6309c0540827ba4da0
|
Modeling gate-pitch scaling impact on stress-induced mobility and external resistance for 20nm-node MOSFETs
|
[
{
"docid": "c10a58037c4b13953236831af304e660",
"text": "A 32 nm generation logic technology is described incorporating 2nd-generation high-k + metal-gate technology, 193 nm immersion lithography for critical patterning layers, and enhanced channel strain techniques. The transistors feature 9 Aring EOT high-k gate dielectric, dual band-edge workfunction metal gates, and 4th-generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. Process yield, performance and reliability are demonstrated on a 291 Mbit SRAM test vehicle, with 0.171 mum2 cell size, containing >1.9 billion transistors.",
"title": ""
}
] |
[
{
"docid": "8d4007b4d769c2d90ae07b5fdaee8688",
"text": "In this project, we implement the semi-supervised Recursive Autoencoders (RAE), and achieve the result comparable with result in [1] on the Movie Review Polarity dataset1. We achieve 76.08% accuracy, which is slightly lower than [1] ’s result 76.8%, with less vector length. Experiments show that the model can learn sentiment and build reasonable structure from sentence.We find longer word vector and adjustment of words’ meaning vector is beneficial, while normalization of transfer function brings some improvement. We also find normalization of the input word vector may be beneficial for training.",
"title": ""
},
{
"docid": "de22ea59ae33c910ba1a2f697ad4fd0c",
"text": "This study aimed to compare the differences of microbial spectrum and antibiotic resistance patterns between external and intraocular bacterial infections in an eye hospital in South China. A total of 737 bacteria isolates from suspected ocular infections were included in this retrospective study covering the period 2010-2013. The organisms cultured from the ocular surface (cornea, conjunctiva) accounted for the majority of the isolates (82.77%, n = 610), followed by the intraocular (aqueous humor, vitreous fluid), which accounted for 17.23% (n = 127). The top three species accounting for the external ocular infections were S. epidermidis (35.25%), P. aeruginosa (8.03%), and S. simulans (4.43%). The top three species for the intraocular infections were S. epidermidis (14.96%), S. hominis (8.66%), and B. subtilis (7.87%). The bacteria from the external ocular surface were more sensitive to neomycin, while those from the intraocular specimens were more sensitive to levofloxacin (P < 0.01). Multidrug resistance was found in 89 bacteria (12.08%), including isolates from both external (13.28%) and intraocular samples (6.30%). The results of this study indicate that the bacteria spectrum of external and intraocular infections is variable in the setting. A high percentage of bacterial organisms were found to be primarily susceptible to neomycin for external infection and levofloxacin for intraocular infection.",
"title": ""
},
{
"docid": "eb962e14f34ea53dec660dfe304756b0",
"text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.",
"title": ""
},
{
"docid": "2f84b44cdce52068b7e692dad7feb178",
"text": "Two stage PCR has been used to introduce single amino acid substitutions into the EF hand structures of the Ca(2+)-activated photoprotein aequorin. Transcription of PCR products, followed by cell free translation of the mRNA, allowed characterisation of recombinant proteins in vitro. Substitution of D to A at position 119 produced an active photoprotein with a Ca2+ affinity reduced by a factor of 20 compared to the wild type recombinant aequorin. This recombinant protein will be suitable for measuring Ca2+ inside the endoplasmic reticulum, the mitochondria, endosomes and the outside of live cells.",
"title": ""
},
{
"docid": "bb2ad600e0e90a1a349e39ce0f097277",
"text": "Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, and wireless assistive technology (AT) that infers users' intentions by detecting their voluntary tongue motion and translating them into user-defined commands. Here we present the new intraoral version of the TDS (iTDS), which has been implemented in the form of a dental retainer. The iTDS system-on-a-chip (SoC) features a configurable analog front-end (AFE) that reads the magnetic field variations inside the mouth from four 3-axial magnetoresistive sensors located at four corners of the iTDS printed circuit board (PCB). A dual-band transmitter (Tx) on the same chip operates at 27 and 432 MHz in the Industrial/Scientific/Medical (ISM) band to allow users to switch in the presence of external interference. The Tx streams the digitized samples to a custom-designed TDS universal interface, built from commercial off-the-shelf (COTS) components, which delivers the iTDS data to other devices such as smartphones, personal computers (PC), and powered wheelchairs (PWC). Another key block on the iTDS SoC is the power management integrated circuit (PMIC), which provides individually regulated and duty-cycled 1.8 V supplies for sensors, AFE, Tx, and digital control blocks. The PMIC also charges a 50 mAh Li-ion battery with constant current up to 4.2 V, and recovers data and clock to update its configuration register through a 13.56 MHz inductive link. The iTDS SoC has been implemented in a 0.5-μm standard CMOS process and consumes 3.7 mW on average.",
"title": ""
},
{
"docid": "77c7f144c63df9022434313cfe2e5290",
"text": "Today the prevalence of online banking is enormous. People prefer to accomplish their financial transactions through the online banking services offered by their banks. This method of accessing is more convenient, quicker and secured. Banks are also encouraging their customers to opt for this mode of e-banking facilities since that result in cost savings for the banks and there is better customer satisfaction. An important aspect of online banking is the precise authentication of users before allowing them to access their accounts. Typically this is done by asking the customers to enter their unique login id and password combination. The success of this authentication relies on the ability of customers to maintain the secrecy of their passwords. Since the customer login to the banking portals normally occur in public environments, the passwords are prone to key logging attacks. To avoid this, virtual keyboards are provided. But virtual keyboards are vulnerable to shoulder surfing based attacks. In this paper, a secured virtual keyboard scheme that withstands such attacks is proposed. Elaborate user studies carried out on the proposed scheme have testified the security and the usability of the proposed approach.",
"title": ""
},
{
"docid": "d50d07954360c23bcbe3802776562f34",
"text": "A stationary display of white discs positioned on intersecting gray bars on a dark background gives rise to a striking scintillating effectthe scintillating grid illusion. The spatial and temporal properties of the illusion are well known, but a neuronal-level explanation of the mechanism has not been fully investigated. Motivated by the neurophysiology of the Limulus retina, we propose disinhibition and self-inhibition as possible neural mechanisms that may give rise to the illusion. In this letter, a spatiotemporal model of the early visual pathway is derived that explicitly accounts for these two mechanisms. The model successfully predicted the change of strength in the illusion under various stimulus conditions, indicating that low-level mechanisms may well explain the scintillating effect in the illusion.",
"title": ""
},
{
"docid": "4597ab07ac630eb5e256f57530e2828e",
"text": "This paper presents novel QoS extensions to distributed control plane architectures for multimedia delivery over large-scale, multi-operator Software Defined Networks (SDNs). We foresee that large-scale SDNs shall be managed by a distributed control plane consisting of multiple controllers, where each controller performs optimal QoS routing within its domain and shares summarized (aggregated) QoS routing information with other domain controllers to enable inter-domain QoS routing with reduced problem dimensionality. To this effect, this paper proposes (i) topology aggregation and link summarization methods to efficiently acquire network topology and state information, (ii) a general optimization framework for flow-based end-to-end QoS provision over multi-domain networks, and (iii) two distributed control plane designs by addressing the messaging between controllers for scalable and secure inter-domain QoS routing. We apply these extensions to streaming of layered videos and compare the performance of different control planes in terms of received video quality, communication cost and memory overhead. Our experimental results show that the proposed distributed solution closely approaches the global optimum (with full network state information) and nicely scales to large networks.",
"title": ""
},
{
"docid": "83591fa3bd2409d2c04fecdbdf9a4ede",
"text": "The dramatic growth of cloud computing services and mobility trends, in terms of 3/4G availability and smart devices, is creating a new phenomenon called \"Consumerization\" that affects the consumers habits in all facets of the their life. Also during the working time people prefer to stay in their consumer environment because that's their comfort zone. A collateral phenomenon is called BYOD (Bring Your Own Device), that means the employees use their own devices also during their working time. These changing of habits represent an opportunity and a challenge for the enterprises. The opportunity is related to two main aspects: the productivity increase and the costs reduction. In a BYOD scenario the end users would pay totally or partially the devices and would work independently from time and location. On the opposite side, the new scenario bring some risks that could be critical. The use of devices for both personal and working activities opens to new security threats to face for IT organization. Also, the direct comparison between public cloud services for personal use and company's IT services could be a frustrating user experience, that's because of the public cloud services are often almost more effective and usable than typical IT company's services. The aim of this work is presenting a brief survey about the emerging methods and models to approach the BYOD phenomenon from the security point of view.",
"title": ""
},
{
"docid": "41b8fb6fd9237c584ce0211f94a828be",
"text": "Over the last few years, two of the main research directions in machine learning of natural language processing have been the study of semi-supervised learning algorithms as a way to train classifiers when the labeled data is scarce, and the study of ways to exploit knowledge and global information in structured learning tasks. In this paper, we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms. Our novel framework unifies and can exploit several kinds of task specific constraints. The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high performance learning with significantly less training data than was possible before on these tasks.",
"title": ""
},
{
"docid": "0dfd46719752d933c966b5e91006bc19",
"text": "A fall is an abnormal activity that occurs rarely, so it is hard to collect real data for falls. It is, therefore, difficult to use supervised learning methods to automatically detect falls. Another challenge in using machine learning methods to automatically detect falls is the choice of engineered features. In this paper, we propose to use an ensemble of autoencoders to extract features from different channels of wearable sensor data trained only on normal activities. We show that the traditional approach of choosing a threshold as the maximum of the reconstruction error on the training normal data is not the right way to identify unseen falls. We propose two methods for automatic tightening of reconstruction error from only the normal activities for better identification of unseen falls. We present our results on two activity recognition datasets and show the efficacy of our proposed method against traditional autoencoder models and two standard one-class classification methods.",
"title": ""
},
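The passage above hinges on a simple mechanism: train an autoencoder only on normal activity, then flag windows whose reconstruction error exceeds a threshold derived from the normal data. The sketch below illustrates that mechanism with a single scikit-learn MLP used as an autoencoder rather than the channel-wise ensemble described in the abstract; the synthetic sensor windows and the 95th-percentile threshold rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative stand-in for wearable-sensor windows of normal activities.
X_normal = rng.normal(0.0, 1.0, size=(500, 20))

# An MLP trained to reproduce its own input acts as a simple autoencoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
autoencoder.fit(X_normal, X_normal)

def reconstruction_error(X):
    return np.mean((autoencoder.predict(X) - X) ** 2, axis=1)

# Instead of the maximum training error (which the paper argues is too loose),
# use a tighter quantile of the normal errors as the anomaly threshold.
threshold = np.quantile(reconstruction_error(X_normal), 0.95)

X_new = rng.normal(0.0, 3.0, size=(10, 20))       # hypothetical unseen windows
print(reconstruction_error(X_new) > threshold)     # True marks fall candidates
```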
{
"docid": "34989468dace8410e9b7b68f0fd78a96",
"text": "A novel coplanar waveguide (CPW)-fed triband planar monopole antenna is presented for WLAN/WiMAX applications. The monopole antenna is printed on a substrate with two rectangular corners cut off. The radiator of the antenna is very compact with an area of only 3.5 × 17 mm2, on which two inverted-L slots are etched to achieve three radiating elements so as to produce three resonant modes for triband operation. With simple structure and small size, the measured and simulated results show that the proposed antenna has 10-dB impedance bandwidths of 120 MHz (2.39-2.51 GHz), 340 MHz (3.38-3.72 GHz), and 1450 MHz (4.79-6.24 GHz) to cover all the 2.4/5.2/5.8-GHz WLAN and the 3.5/5.5-GHz WiMAX bands, and good dipole-like radiation characteristics are obtained over the operating bands.",
"title": ""
},
{
"docid": "f1598c31e059d2e795f8fb393b21bb46",
"text": "We present a Reinforcement Learning (RL) solution to the view planning problem (VPP), which generates a sequence of view points that are capable of sensing all accessible area of a given object represented as a 3D model. In doing so, the goal is to minimize the number of view points, making the VPP a class of set covering optimization problem (SCOP). The SCOP is NP-hard, and the inapproximability results tell us that the greedy algorithm provides the best approximation that runs in polynomial time. In order to find a solution that is better than the greedy algorithm, (i) we introduce a novel score function by exploiting the geometry of the 3D model, (ii) we device an intuitive approach to VPP using this score function, and (iii) we cast VPP as a Markovian Decision Process (MDP), and solve the MDP in RL framework using well-known RL algorithms. In particular, we use SARSA, Watkins-Q and TD with function approximation to solve the MDP. We compare the results of our method with the baseline greedy algorithm in an extensive set of test objects, and show that we can outperform the baseline in almost all cases.",
"title": ""
},
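The abstract above casts view planning as an MDP solved with SARSA, Watkins-Q and TD. As a hedged illustration of the on-policy SARSA update only, the sketch below runs tabular SARSA on a toy chain MDP; the states, rewards and hyperparameters are invented and have nothing to do with the actual view-planning formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 6, 2            # toy chain: action 1 moves right, action 0 stays
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else s
    reward = 1.0 if s_next == n_states - 1 else -0.01
    return s_next, reward, s_next == n_states - 1

def policy(s):
    return int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))

for _ in range(500):                  # episodes
    s, a = 0, policy(0)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = policy(s2)
        # SARSA bootstraps on the action actually taken next (on-policy).
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2, a2]) - Q[s, a])
        s, a = s2, a2

print(np.argmax(Q, axis=1))           # non-terminal states should prefer "move right"
```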
{
"docid": "d74f13cbc4b2b6cace06a9f55b6d060c",
"text": "Climate fluctuations and human exploitation are causing global changes in nutrient enrichment of terrestrial and aquatic ecosystems and declining abundances of apex predators. The resulting trophic cascades have had profound effects on food webs, leading to significant economic and societal consequences. However, the strength of cascades-that is the extent to which a disturbance is diminished as it propagates through a food web-varies widely between ecosystems, and there is no formal theory as to why this should be so. Some food chain models reproduce cascade effects seen in nature, but to what extent is this dependent on their formulation? We show that inclusion of processes represented mathematically as density-dependent regulation of either consumer uptake or mortality rates is necessary for the generation of realistic 'top-down' cascades in simple food chain models. Realistically modelled 'bottom-up' cascades, caused by changing nutrient input, are also dependent on the inclusion of density dependence, but especially on mortality regulation as a caricature of, e.g. disease and parasite dynamics or intraguild predation. We show that our conclusions, based on simple food chains, transfer to a more complex marine food web model in which cascades are induced by varying river nutrient inputs or fish harvesting rates.",
"title": ""
},
{
"docid": "32e71f6ea2a624d669dfbb7a52042432",
"text": "In this paper, a design method of an ultra-wideband multi-section power divider on suspended stripline (SSL) is presented. A clear design guideline for ultra-wideband power dividers is provided. As a design example, a 10-section SSL power divider is implemented. The fabricated divider exhibits the minimum insertion loss of 0.3 dB, the maximum insertion loss of 1.5 dB from 1 to 19 GHz. The measured VSWR is typically 1.40:1, and the isolation between output-port is typically 20 dB.",
"title": ""
},
{
"docid": "73d3f2b0af6e6d1e27139b80ae0d0c81",
"text": "The acceleration of rhythm of everyday life requires efficiency and flexibility in daily routines. The real expectations and needs of people concerning intelligent home devices should be carefully researched. The project Moon 2.0 by Indesit Company presents alternative ways of producing household appliance services developing a 2.0 Human Machine Interface and programs setting unit for washing machines, totally manageable by smart phones or I-Phones. Users cannot explicitly control washing machines when they would like to use a feature combination that has not application in a current washing program. The application of the Web 2.0 philosophy to the washing machine let the user the possibility to directly control all the existing features of the washing programs and to decide time by time how many programs their machine should have, with regards to the transparency and interactivity concepts of the ambient intelligence. Moon 2.0 should not be confused with an hand held personal home assistant capable of controlling a wide range of electronic home devices. The smart phone behaves the intelligence of the washing machine and offers the user endless customisation possibilities.",
"title": ""
},
{
"docid": "96607113a8b6d0ca1c043d183420996b",
"text": "Primary retroperitoneal masses include a diverse, and often rare, group of neoplastic and non-neoplastic entities that arise within the retroperitoneum but do not originate from any retroperitoneal organ. Their overlapping appearances on cross-sectional imaging may pose a diagnostic challenge to the radiologist; familiarity with characteristic imaging features, together with relevant clinical information, helps to narrow the differential diagnosis. In this article, a systematic approach to identifying and classifying primary retroperitoneal masses is described. The normal anatomy of the retroperitoneum is reviewed with an emphasis on fascial planes, retroperitoneal compartments, and their contents using cross-sectional imaging. Specific radiologic signs to accurately identify an intra-abdominal mass as primary retroperitoneal are presented, first by confirming the location as retroperitoneal and secondly by excluding an organ of origin. A differential diagnosis based on a predominantly solid or cystic appearance, including neoplastic and non-neoplastic entities, is elaborated. Finally, key diagnostic clues based on characteristic imaging findings are described, which help to narrow the differential diagnosis. This article provides a comprehensive overview of the cross-sectional imaging features of primary retroperitoneal masses, including normal retroperitoneal anatomy, radiologic signs of retroperitoneal masses and the differential diagnosis of solid and cystic, neoplastic and non-neoplastic retroperitoneal masses, with a view to assist the radiologist in narrowing the differential diagnosis.",
"title": ""
},
{
"docid": "dd2e81d24584fe0684266217b732d881",
"text": "In order to understand the role of titanium isopropoxide (TIPT) catalyst on insulation rejuvenation for water tree aged cables, dielectric properties and micro structure changes are investigated for the rejuvenated cables. Needle-shape defects are made inside cross-linked polyethylene (XLPE) cable samples to form water tree in the XLPE layer. The water tree aged samples are injected by the liquid with phenylmethyldimethoxy silane (PMDMS) catalyzed by TIPT for rejuvenation, and the breakdown voltage of the rejuvenated samples is significantly higher than that of the new samples. By the observation of scanning electronic microscope (SEM), the nano-TiO2 particles are observed inside the breakdown channels of the rejuvenated samples. Accordingly, the insulation performance of rejuvenated samples is significantly enhanced by the nano-TiO2 particles. Through analyzing the products of hydrolysis from TIPT, the nano-scale TiO2 particles are observed, and its micro-morphology is consistent with that observed inside the breakdown channels. According to the observation, the insulation enhancement mechanism is described. Therefore, the dielectric property of the rejuvenated cables is improved due to the nano-TiO2 produced by the hydrolysis from TIPT.",
"title": ""
},
{
"docid": "da61b8bd6c1951b109399629f47dad16",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
},
{
"docid": "565c949a2bf8b6f6c3d246c7c195419d",
"text": "Extracorporeal photochemotherapy (ECP) is an effective treatment modality for patients with erythrodermic myocosis fungoides (MF) and Sezary syndrome (SS). During ECP, a fraction of peripheral blood mononuclear cells is collected, incubated ex-vivo with methoxypsoralen, UVA irradiated, and finally reinfused to the patient. Although the mechanism of action of ECP is not well established, clinical and laboratory observations support the hypothesis of a vaccination-like effect. ECP induces apoptosis of normal and neoplastic lymphocytes, while enhancing differentiation of monocytes towards immature dendritic cells (imDCs), followed by engulfment of apoptotic bodies. After reinfusion, imDCs undergo maturation and antigenic peptides from the neoplastic cells are expressed on the surface of DCs. Mature DCs travel to lymph nodes and activate cytotoxic T-cell clones with specificity against tumor antigens. Disease control is mediated through cytotoxic T-lymphocytes with tumor specificity. The efficacy and excellent safety profile of ECP has been shown in a large number of retrospective trials. Previous studies showed that monotherapy with ECP produces an overall response rate of approximately 60%, while clinical data support that ECP is much more effective when combined with other immune modulating agents such as interferons or retinoids, or when used as consolidation treatment after total skin electron beam irradiation. However, only a proportion of patients actually respond to ECP and parameters predictive of response need to be discovered. A patient with a high probability of response to ECP must fulfill all of the following criteria: (1) SS or erythrodermic MF, (2) presence of neoplastic cells in peripheral blood, and (3) early disease onset. Despite the fact that ECP has been established as a standard treatment modality, no prospective randomized study has been conducted so far, to the authors' knowledge. Considering the high cost of the procedure, the role of ECP in the treatment of SS/MF needs to be clarified via well designed multicenter prospective randomized trials.",
"title": ""
}
] |
scidocsrr
|
6d0a7a3badf32dcbc215a93d114a80b8
|
A Puppet Interface for Retrieval of Motion Capture Data
|
[
{
"docid": "ae58bc6ced30bf2c855473541840ec4d",
"text": "Techniques from the image and signal processing domain can be successfully applied to designing, modifying, and adapting animated motion. For this purpose, we introduce multiresolution motion filtering, multitarget motion interpolation with dynamic timewarping, waveshaping and motion displacement mapping. The techniques are well-suited for reuse and adaptation of existing motion data such as joint angles, joint coordinates or higher level motion parameters of articulated figures with many degrees of freedom. Existing motions can be modified and combined interactively and at a higher level of abstraction than conventional systems support. This general approach is thus complementary to keyframing, motion capture, and procedural animation.",
"title": ""
}
] |
[
{
"docid": "821b46014ba0828b5d8cafef8a1289fd",
"text": "Poor graduation and retention rates are widespread in higher education, with significant economic and social ramifications for individuals and our society in general. Early intervention with students most at risk of attrition can be effective in improving college student retention. Our research aim was to create a firstyear at-risk model using educational data mining and to apply that model at New York Institute of Technology (NYIT). Building the model creates new challenges: (1)the model must be welcomed by counseling staff and the outputs need to be user friendly, and (2)the model needs to work automatically from data collection to processing and prediction in order to eliminate the bottleneck of a human operator which can slow down the process. The result of our effort was an end-to-end solution, including a cost-effective infrastructure, that could be used by student support personnel for early identification and early intervention. The Student At-Risk Model (STAR) provides retention risk ratings for each new freshman at NYIT before the start of the fall semester and identifies the key factors that place a student at risk of not returning the following year. The model was built using historical data for the 2011 and 2012 Fall Class and the STAR system went into production at NYIT in Fall 2013.",
"title": ""
},
{
"docid": "48ea1d793f0ae2b79f406c87fe5980b5",
"text": "In this paper, we describe a UHF radio-frequency-identification tag test and measurement system based on National Instruments LabVIEW-controlled PXI RF hardware. The system operates in 800-1000-MHz frequency band with a variable output power up to 30 dBm and is capable of testing tags using Gen2 and other protocols. We explain testing methods and metrics, describe in detail the construction of our system, show its operation with real tag measurement examples, and draw general conclusions.",
"title": ""
},
{
"docid": "5e6cc7d1849b6003ff86d3ba6227d546",
"text": "BACKGROUND\nThe prognosis of advanced (stage IV) cancer of the digestive organs is very poor. We have previously reported a case of advanced breast cancer with bone metastasis that was successfully treated with combined treatments including autologous formalin-fixed tumor vaccine (AFTV). Herein, we report the success of this approach in advanced stage IV (heavily metastasized) cases of gall bladder cancer and colon cancer.\n\n\nCASE PRESENTATION\nCase 1: A 61-year-old woman with stage IV gall bladder cancer (liver metastasis and lymph node metastasis) underwent surgery in May 2011, including partial resection of the liver. She was treated with AFTV as the first-line adjuvant therapy, followed by conventional chemotherapy. This patient is still alive without any recurrence, as confirmed with computed tomography, for more than 5 years. Case 2: A 64-year-old man with stage IV colon cancer (multiple para-aortic lymph node metastases and direct abdominal wall invasion) underwent non-curative surgery in May 2006. Following conventional chemotherapy, two courses of AFTV and radiation therapy were administered sequentially. This patient has had no recurrence for more than 5 years.\n\n\nCONCLUSION\nWe report the success of combination therapy including AFTV in cases of liver-metastasized gall bladder cancer and abdominal wall-metastasized colon cancer. Both patients experienced long-lasting, complete remission. Therefore, combination therapies including AFTV should be considered in patients with advanced cancer of the digestive organs.",
"title": ""
},
{
"docid": "397b3b96c16b2ce310ab61f9d2d7bdbf",
"text": "Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.",
"title": ""
},
{
"docid": "f2b6afabd67354280d091d11e8265b96",
"text": "This paper aims to present three new methods for color detection and segmentation of road signs. The images are taken by a digital camera mounted in a car. The RGB images are converted into IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place in Dalarna University/Sweden in the field of the ITS",
"title": ""
},
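The entry above relies on converting RGB images to the IHLS space and thresholding hue/saturation to pick out sign colors. The sketch below uses the standard HLS space from Python's colorsys as a stand-in for IHLS, and the red-hue and saturation thresholds are purely illustrative guesses rather than values from the paper.

```python
import colorsys
import numpy as np

def red_mask(rgb_image):
    """Flag pixels whose hue/saturation fall in an illustrative 'sign red' range.

    rgb_image: H x W x 3 array of floats in [0, 1]. Standard HLS is used here
    as a stand-in for the IHLS space used in the paper.
    """
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            r, g, b = rgb_image[i, j]
            hue, light, sat = colorsys.rgb_to_hls(r, g, b)
            is_red_hue = hue < 0.05 or hue > 0.95      # hue wraps around 0
            mask[i, j] = is_red_hue and sat > 0.4 and 0.2 < light < 0.8
    return mask

demo = np.zeros((2, 2, 3))
demo[0, 0] = [0.8, 0.1, 0.1]    # a strongly red pixel
print(red_mask(demo))           # only the red pixel should be flagged
```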
{
"docid": "5ea366c59a6cd57ac2311a027084b566",
"text": "Shape changing interfaces give physical shapes to digital data so that users can feel and manipulate data with their hands and bodies. However, physical objects in our daily life not only have shape but also various material properties. In this paper, we propose an interaction technique to represent material properties using shape changing interfaces. Specifically, by integrating the multi-modal sensation techniques of haptics, our approach builds a perceptive model for the properties of deformable materials in response to direct manipulation. As a proof-of-concept prototype, we developed preliminary physics algorithms running on pin-based shape displays. The system can create computationally variable properties of deformable materials that are visually and physically perceivable. In our experiments, users identify three deformable material properties (flexibility, elasticity and viscosity) through direct touch interaction with the shape display and its dynamic movements. In this paper, we describe interaction techniques, our implementation, future applications and evaluation on how users differentiate between specific properties of our system. Our research shows that shape changing interfaces can go beyond simply displaying shape allowing for rich embodied interaction and perceptions of rendered materials with the hands and body.",
"title": ""
},
{
"docid": "6e9e687db8f202a8fa6d49c5996e7141",
"text": "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.",
"title": ""
},
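PALEO's key idea, as summarized above, is that a network architecture implies its computation and communication requirements, which can be mapped onto hardware to predict run time. The sketch below is not PALEO itself, only a back-of-the-envelope cost model in the same spirit: per-layer time from FLOPs and bytes moved, plus a simple all-reduce term for data-parallel gradient synchronization; every number in it is an assumption.

```python
def layer_time(flops, device_flops_per_s, bytes_moved, bandwidth_bytes_per_s):
    """Crude per-layer estimate: compute time plus data-movement time."""
    return flops / device_flops_per_s + bytes_moved / bandwidth_bytes_per_s

def data_parallel_step(layer_specs, n_workers, gradient_bytes, interconnect_bw):
    """One training step: forward/backward compute plus gradient synchronization."""
    compute = sum(layer_time(**spec) for spec in layer_specs)
    # Ring all-reduce style cost model: 2 * (n-1)/n * message size / bandwidth.
    sync = 2 * (n_workers - 1) / n_workers * gradient_bytes / interconnect_bw
    return compute + sync

# Illustrative numbers only: two small layers on a ~10 TFLOP/s device.
layers = [
    dict(flops=2e9, device_flops_per_s=1e13, bytes_moved=5e7, bandwidth_bytes_per_s=5e11),
    dict(flops=8e9, device_flops_per_s=1e13, bytes_moved=1e8, bandwidth_bytes_per_s=5e11),
]
print(data_parallel_step(layers, n_workers=8, gradient_bytes=2e8, interconnect_bw=1e10))
```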
{
"docid": "661a5c7f49d4232f61a4a2ee0c1ddbff",
"text": "Power is now a first-order design constraint in large-scale parallel computing. Used carefully, dynamic voltage scaling can execute parts of a program at a slower CPU speed to achieve energy savings with a relatively small (possibly zero) time delay. However, the problem of when to change frequencies in order to optimize energy savings is NP-complete, which has led to many heuristic energy-saving algorithms. To determine how closely these algorithms approach optimal savings, we developed a system that determines a bound on the energy savings for an application. Our system uses a linear programming solver that takes as inputs the application communication trace and the cluster power characteristics and then outputs a schedule that realizes this bound. We apply our system to three scientific programs, two of which exhibit load imbalance---particle simulation and UMT2K. Results from our bounding technique show particle simulation is more amenable to energy savings than UMT2K.",
"title": ""
},
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
},
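The abstract above estimates a Vector Autoregression over signups, WOM referrals and traditional marketing and reads off long-run effects. The sketch below fits a VAR with statsmodels on synthetic weekly series that merely stand in for those variables; the data-generating process, lag order and the use of cumulative impulse responses as a carryover summary are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 200

# Synthetic stand-ins for the three series used in the paper.
wom = rng.poisson(50, T).astype(float)
marketing = rng.poisson(5, T).astype(float)
lagged_wom = np.concatenate(([50.0], wom[:-1]))            # signups respond with a lag
signups = 0.5 * lagged_wom + 2.0 * marketing + rng.normal(0, 5, T)

data = pd.DataFrame({"signups": signups, "wom": wom, "marketing": marketing})

results = VAR(data).fit(maxlags=4, ic="aic")   # lag order chosen by AIC
irf = results.irf(10)                          # impulse responses over 10 periods

# Cumulative response of signups to a one-unit shock in WOM after 10 periods.
i_sig, i_wom = data.columns.get_loc("signups"), data.columns.get_loc("wom")
print(irf.cum_effects[-1][i_sig, i_wom])
```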
{
"docid": "fb1f3f300bcd48d99f0a553a709fdc89",
"text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.",
"title": ""
},
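The converter above tracks the maximum power point with an Incremental Conductance (IC) method modified by a D-sweep. The sketch below shows only the plain IC update rule (compare dI/dV with -I/V and nudge the duty ratio), without the D-sweep extension; the step size, the measurements and the boost-converter sign convention (raising the duty ratio lowers the panel voltage) are all assumptions for illustration.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update of the converter duty ratio.

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V; the sign of the
    deviation tells us which way to move the panel operating voltage. A boost
    converter is assumed, where increasing the duty ratio lowers that voltage.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di != 0:
            duty += step if di > 0 else -step
    else:
        g = di / dv                      # incremental conductance dI/dV
        # In practice the equality test would use a small tolerance band.
        if g != -i / v:
            # g > -I/V means we sit left of the MPP: raise the panel voltage.
            duty -= step if g > -i / v else -step
    return min(max(duty, 0.0), 1.0)

# Hypothetical consecutive measurements from the PV panel.
print(incremental_conductance_step(v=30.2, i=5.1, v_prev=30.0, i_prev=5.2, duty=0.45))
```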
{
"docid": "32a3c87eb1f2415acaf3bccd652c1bea",
"text": "Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.",
"title": ""
},
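The core construction in the abstract above is to train not in the native parameter space but in a small random subspace: parameters are theta0 + P z for a fixed random matrix P, and only z is optimized, with the subspace dimension grown until solutions appear. The sketch below applies that construction to a tiny numpy logistic-regression problem; the data, dimensions and learning rate are synthetic stand-ins, not the networks or datasets from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic binary classification with a 100-dimensional native parameter space.
n, D, d = 400, 100, 10                       # samples, native dim, subspace dim
X = rng.normal(size=(n, D))
y = (X @ rng.normal(size=D) > 0).astype(float)

theta0 = np.zeros(D)                         # frozen initial parameters
P = rng.normal(size=(D, d)) / np.sqrt(d)     # fixed random projection (never trained)
z = np.zeros(d)                              # the only trainable parameters

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.5
for _ in range(300):
    w = theta0 + P @ z                       # native parameters as a function of z
    grad_w = X.T @ (sigmoid(X @ w) - y) / n  # gradient in the native space
    z -= lr * (P.T @ grad_w)                 # chain rule: only z gets updated

acc = np.mean((sigmoid(X @ (theta0 + P @ z)) > 0.5) == (y > 0.5))
print(f"training accuracy in a {d}-dim subspace of a {D}-dim problem: {acc:.2f}")
```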
{
"docid": "b753eb752d4f87dbff82d77e8417f389",
"text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:",
"title": ""
},
{
"docid": "d43e159aae67755f9bebddb62202e1d5",
"text": "Inactivation of tumour-suppressor genes by homozygous deletion is a prototypic event in the cancer genome, yet such deletions often encompass neighbouring genes. We propose that homozygous deletions in such passenger genes can expose cancer-specific therapeutic vulnerabilities when the collaterally deleted gene is a member of a functionally redundant family of genes carrying out an essential function. The glycolytic gene enolase 1 (ENO1) in the 1p36 locus is deleted in glioblastoma (GBM), which is tolerated by the expression of ENO2. Here we show that short-hairpin-RNA-mediated silencing of ENO2 selectively inhibits growth, survival and the tumorigenic potential of ENO1-deleted GBM cells, and that the enolase inhibitor phosphonoacetohydroxamate is selectively toxic to ENO1-deleted GBM cells relative to ENO1-intact GBM cells or normal astrocytes. The principle of collateral vulnerability should be applicable to other passenger-deleted genes encoding functionally redundant essential activities and provide an effective treatment strategy for cancers containing such genomic events.",
"title": ""
},
{
"docid": "cb1e6d11d372e72f7675a55c8f2c429d",
"text": "We evaluate the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system ar chitecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a h ost computer. A LabVIEW TM host application controlling a frame grabber and an industrial camer a is used to capture and exchange video data with t he hardware co-processor via a high speed USB2.0 chann el, implemented with a standard macrocell. The FPGA accelerator is based on a Altera Cyclone II ch ip and is designed as a system-on-a-programmablechip (SOPC) with the help of an embedded Nios II so ftware processor. The SOPC system integrates the CPU, external and on chip memory, the communication channel and typical image filters appropriate for the evaluation of the system performance. Measured tran sfer rates over the communication channel and processing times for the implemented hardware/softw are logic are presented for various frame sizes. A comparison with other solutions is given and a rang e of applications is also discussed.",
"title": ""
},
{
"docid": "584d2858178e4e33855103a71d7fdce4",
"text": "This paper presents 5G mm-wave phased-array antenna for 3D-hybrid beamforming. This uses MFC to steer beam for the elevation, and uses butler matrix network for the azimuth. In case of butler matrix network, this, using 180° ring hybrid coupler switch network, is proposed to get additional beam pattern and improved SRR in comparison with conventional structure. Also, it can be selected 15 of the azimuth beam pattern. When using the chip of proposed structure, it is possible to get variable kind of beam-forming over 1000. In addition, it is suitable 5G system or a satellite communication system that requires a beamforming.",
"title": ""
},
{
"docid": "823d838471a475ec32d460711b9805b4",
"text": "Marketing has a tradition in conducting scientific research with cutting-edge techniques developed in management science, such as data envelopment analysis (DEA) (Charnes et al. 1985). Two decades ago, Kamakura, Ratchford, and Agrawal (1988) applied DEA to examine market efficiency and consumer welfare loss. In this review of three new books, my purpose is (1) to provide a background of DEA for marketing scholars and executives and (2) to motivate them with exciting DEA advances for marketing theory and practice. All three books provide brief descriptions of DEA’s history, origin, and basic models. For beginners, Ramanathan’s work, An Introduction to Data Envelopment Analysis, is a good source, offering basic concepts of DEA’s efficiency and programming formulations in a straightforward manner (with some illustrations) in the first three chapters. A unique feature of this book is that it dedicates a chapter to discussing as many as 11 DEA computer software programs and explaining some noncommercial DEA packages that are available free on the Internet for academic purposes. As Ramanathan states (p. 111), “[I]n this computer era, it is important that any management science technique has adequate software support so that potential users are encouraged to use it. Software harnesses the computing power of [personal computers] for use in practical decision-making situations. It can also expedite the implementation of a method.” EDITOR: Naveen Donthu",
"title": ""
},
{
"docid": "438a373b6e56a384020492b54dcc124b",
"text": "A new electrostatic discharge (ESD) protection scheme for differential low-noise amplifier (LNA) was proposed in this paper. The new ESD protection scheme, which evolved from the conventional double-diode ESD protection scheme without adding any extra device, was realized with cross-coupled silicon-controlled rectifier (SCR). With the new ESD protection scheme, the pin-to-pin ESD robustness can be improved, which was the most critical ESD-test pin combination for differential input pads. Experimental results had shown that differential LNA with cross-coupled-SCR ESD protection scheme can achieve excellent ESD robustness and good RF performances.",
"title": ""
},
{
"docid": "950a6a611f1ceceeec49534c939b4e0f",
"text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].",
"title": ""
},
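The passage above motivates gradients with respect to complex-valued parameters via complex LMS adaptive filtering. As a hedged, self-contained illustration of that setting (not material from the original), the sketch below identifies an unknown complex FIR channel with the standard complex LMS update w ← w + μ e x*, derived from the Wirtinger gradient of the squared error; the channel, input and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
N, taps, mu = 2000, 4, 0.01

# Unknown complex FIR channel driven by a QPSK-like input signal.
h_true = np.array([0.8 + 0.3j, -0.2 + 0.5j, 0.1 - 0.1j, 0.05 + 0.02j])
x = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
noise = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
d = np.convolve(x, h_true)[:N] + noise        # desired (observed) signal

w = np.zeros(taps, dtype=complex)
for n in range(taps, N):
    x_vec = x[n - taps + 1:n + 1][::-1]       # most recent sample first
    e = d[n] - np.dot(w, x_vec)               # a priori estimation error
    w = w + mu * e * np.conj(x_vec)           # complex LMS (Wirtinger gradient) update

print(np.round(w, 3))                         # should be close to h_true
```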
{
"docid": "e4e2bb8bf8cc1488b319a59f82a71f08",
"text": "We aim to dismantle the prevalent black-box neural architectures used in complex visual reasoning tasks, into the proposed eXplainable and eXplicit Neural Modules (XNMs), which advance beyond existing neural module networks towards using scene graphs — objects as nodes and the pairwise relationships as edges — for explainable and explicit reasoning with structured knowledge. XNMs allow us to pay more attention to teach machines how to “think”, regardless of what they “look”. As we will show in the paper, by using scene graphs as an inductive bias, 1) we can design XNMs in a concise and flexible fashion, i.e., XNMs merely consist of 4 meta-types, which significantly reduce the number of parameters by 10 to 100 times, and 2) we can explicitly trace the reasoning-flow in terms of graph attentions. XNMs are so generic that they support a wide range of scene graph implementations with various qualities. For example, when the graphs are detected perfectly, XNMs achieve 100% accuracy on both CLEVR and CLEVR CoGenT, establishing an empirical performance upper-bound for visual reasoning; when the graphs are noisily detected from real-world images, XNMs are still robust to achieve a competitive 67.5% accuracy on VQAv2.0, surpassing the popular bag-of-objects attention models without graph structures.",
"title": ""
}
] |
scidocsrr
|
09ab79166d649d927ba1096fdb2fd5a6
|
Learning Knowledge Graphs for Question Answering through Conversational Dialog
|
[
{
"docid": "cf2fc7338a0a81e4c56440ec7c3c868e",
"text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.",
"title": ""
}
] |
[
{
"docid": "f7a69acbc2766e990cbd4f3c9b4124d1",
"text": "This paper aims at assisting empirical researchers benefit from recent advances in causal inference. The paper stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, and the conditional nature of causal claims inferred from nonexperimental studies. These emphases are illustrated through a brief survey of recent results, including the control of confounding, the assessment of causal effects, the interpretation of counterfactuals, and a symbiosis between counterfactual and graphical methods of analysis.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "f717225fa7518383e0db362e673b9af4",
"text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we will get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to huge data in web, discovery of patterns and there analysis for further improvement in website becomes a real time necessity. The main focus of this paper is using of hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework is to overcome the problem arise due to using of any single algorithm, we will give results based on comparison of two different algorithms like Longest Common Sequence (LCS) algorithm and Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm. ________________________________________________________________________________________________________",
"title": ""
},
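One of the two algorithms compared above is the Longest Common Sequence (LCS) method for matching users' navigation sequences. The sketch below is the textbook LCS dynamic program applied to two page-visit sequences; the page identifiers are invented, and this says nothing about how the paper's hybrid engine weighs LCS against FP-Growth.

```python
def longest_common_subsequence(a, b):
    """Classic O(len(a) * len(b)) dynamic program returning one LCS of a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    out, i, j = [], m, n                      # backtrack to recover one common pattern
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Hypothetical page-visit sequences from two user sessions.
s1 = ["home", "catalog", "item42", "cart", "checkout"]
s2 = ["home", "search", "item42", "checkout"]
print(longest_common_subsequence(s1, s2))     # ['home', 'item42', 'checkout']
```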
{
"docid": "945cf1645df24629842c5e341c3822e7",
"text": "Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before outsourced to the commercial public cloud, which makes effective data utilization service a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by large number of users and huge amount of data files in cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures the file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect those sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.",
"title": ""
},
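The scheme above depends on a one-to-many order-preserving mapping so the server can rank encrypted relevance scores without learning their exact values. The toy sketch below conveys only the basic idea: each integer score gets its own disjoint, increasing range and a fresh random offset inside it, so comparisons still respect score order while equal scores rarely encode identically. This is not the probability-controlled construction from the paper and offers no real security guarantees.

```python
import secrets

BUCKET = 2**16   # width of the random range reserved for each plaintext score

def encode_score(score: int) -> int:
    """Map an integer relevance score into its own bucket of random values.

    Buckets are disjoint and increase with the score (order-preserving), and a
    fresh random offset is drawn on every call (one-to-many). Toy sketch only.
    """
    return score * BUCKET + secrets.randbelow(BUCKET)

scores = [3, 7, 7, 12]
encoded = [encode_score(s) for s in scores]
print(encoded)
# Sorting documents by encoded value yields an order consistent with raw scores
# (ties among equal scores are broken arbitrarily).
print(sorted(range(len(scores)), key=lambda k: encoded[k]))
```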
{
"docid": "046f15ecf1037477b10bfb4fa315c9c9",
"text": "With the rapid proliferation of camera-equipped smart devices (e.g., smartphones, pads, tablets), visible light communication (VLC) over screen-camera links emerges as a novel form of near-field communication. Such communication via smart devices is highly competitive for its user-friendliness, security, and infrastructure-less (i.e., no dependency on WiFi or cellular infrastructure). However, existing approaches mostly focus on improving the transmission speed and ignore the transmission reliability. Considering the interplay between the transmission speed and reliability towards effective end-to-end communication, in this paper, we aim to boost the throughput over screen-camera links by enhancing the transmission reliability. To this end, we propose RDCode, a robust dynamic barcode which enables a novel packet-frame-block structure. Based on the layered structure, we design different error correction schemes at three levels: intra-blocks, inter-blocks and inter-frames, in order to verify and recover the lost blocks and frames. Finally, we implement RDCode and experimentally show that RDCode reaches a high level of transmission reliability (e.g., reducing the error rate to 10%) and yields a at least doubled transmission rate, compared with the existing state-of-the-art approach COBRA.",
"title": ""
},
{
"docid": "733e5961428e5aad785926e389b9bd75",
"text": "OBJECTIVE\nPeer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, often peer support has not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction.\n\n\nMETHODS\nThe authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE.\n\n\nRESULTS\nTen studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted on the relative lack of rigorously tested empirical studies within the literature and inability to disentangle the effects of the group treatment that is often included as a component of other services.\n\n\nCONCLUSION\nPeer support groups included in addiction treatment shows much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research.",
"title": ""
},
{
"docid": "d0a4bc15208b12b1647eb21e7ca9cc6c",
"text": "The investment in an automated fabric defect detection system is more than economical when reduction in labor cost and associated benefits are considered. The development of a fully automated web inspection system requires robust and efficient fabric defect detection algorithms. The inspection of real fabric defects is particularly challenging due to the large number of fabric defect classes, which are characterized by their vagueness and ambiguity. Numerous techniques have been developed to detect fabric defects and the purpose of this paper is to categorize and/or describe these algorithms. This paper attempts to present the first survey on fabric defect detection techniques presented in about 160 references. Categorization of fabric defect detection techniques is useful in evaluating the qualities of identified features. The characterization of real fabric surfaces using their structure and primitive set has not yet been successful. Therefore, on the basis of the nature of features from the fabric surfaces, the proposed approaches have been characterized into three categories; statistical, spectral and model-based. In order to evaluate the state-of-the-art, the limitations of several promising techniques are identified and performances are analyzed in the context of their demonstrated results and intended application. The conclusions from this paper also suggest that the combination of statistical, spectral and model-based approaches can give better results than any single approach, and is suggested for further research.",
"title": ""
},
{
"docid": "e72382020e2b15be32047da611ad078f",
"text": "This article describes the results of a case study that applies Neural Networkbased Optical Character Recognition (OCR) to scanned images of books printed between 1487 and 1870 by training the OCR engine OCRopus (Breuel et al. 2013) on the RIDGES herbal text corpus (Odebrecht et al. 2017, in press). Training specific OCR models was possible because the necessary ground truth is available as error-corrected diplomatic transcriptions. The OCR results have been evaluated for accuracy against the ground truth of unseen test sets. Character and word accuracies (percentage of correctly recognized items) for the resulting machine-readable texts of individual documents range from 94% to more than 99% (character level) and from 76% to 97% (word level). This includes the earliest printed books, which were thought to be inaccessible by OCR methods until recently. Furthermore, OCR models trained on one part of the corpus consisting of books with different printing dates and different typesets (mixed models) have been tested for their predictive power on the books from the other part containing yet other fonts, mostly yielding character accuracies well above 90%. It therefore seems possible to construct generalized models trained on a range of fonts that can be applied to a wide variety of historical printings still giving good results. A moderate postcorrection effort of some pages will then enable the training of individual models with even better accuracies. Using this method, diachronic corpora including early printings can be constructed much faster and cheaper than by manual transcription. The OCR methods reported here open up the possibility of transforming our printed textual cultural 1 ar X iv :1 60 8. 02 15 3v 2 [ cs .C L ] 1 F eb 2 01 7 Springmann & Lüdeling OCR of historical printings heritage into electronic text by largely automatic means, which is a prerequisite for the mass conversion of scanned books.",
"title": ""
},
{
"docid": "2ad8723c9fce1a6264672f41824963f8",
"text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.",
"title": ""
},
{
"docid": "e7616fbe9853bf8e1c89441287baf30c",
"text": "The objective of the current study is to compare the use of a nasal continuous positive airway pressure (nCPAP) to a high-flow humidified nasal cannula (HFNC) in infants with acute bronchiolitis, who were admitted to a pediatric intensive care unit (PICU) during two consecutive seasons. We retrospectively reviewed the medical records of all infants admitted to a PICU at a tertiary care French hospital during the bronchiolitis seasons of 2010/11 and 2011/12. Infants admitted to the PICU, who required noninvasive respiratory support, were included. The first noninvasive respiratory support modality was nCPAP during the 2010/11 season, while HFNC was used during the 2011/2012 season. We compared the length of stay (LOS) in the PICU; the daily measure of PCO2 and pH; and the mean of the five higher values of heart rate (HR), respiratory rate (RR), FiO2, and SpO2 each day, during the first 5 days. Thirty-four children met the inclusion criteria: 19 during the first period (nCPAP group) and 15 during the second period (HFNC group). Parameters such as LOS in PICU and oxygenation were similar in the two groups. Oxygen weaning occurred during the same time for the two groups. There were no differences between the two groups for RR, HR, FiO2, and CO2 evolution. HFNC therapy failed in three patients, two of whom required invasive mechanical ventilation, versus one in the nCPAP group. Conclusion: We did not find a difference between HFNC and nCPAP in the management of severe bronchiolitis in our PICU. Larger prospective studies are required to confirm these findings.",
"title": ""
},
{
"docid": "0879f749188cbb88a8cefff60d0d4f6e",
"text": "Raw tomato contains a high level of lycopene, which has been reported to have many important health benefits. However, information on the changes of the lycopene content in tomato during cooking is limited. In this study, the lycopene content in raw and thermally processed (baked, microwaved, and fried) tomato slurries was investigated and analyzed using a high-performance liquid chromatography (HPLC) method. In the thermal stability study using a pure lycopene standard, 50% of lycopene was degraded at 100 ◦C after 60 min, 125 ◦C after 20 min, and 150 ◦C after less than 10 min. Only 64.1% and 51.5% lycopene was retained when the tomato slurry was baked at 177 ◦C and 218 ◦C for 15 min, respectively. At these temperatures, only 37.3% and 25.1% of lycopene was retained after baking for 45 min. In 1 min of the high power of microwave heating, 64.4% of lycopene still remained. However, more degradation of lycopene in the slurry was found in the frying study. Only 36.6% and 35.5% of lycopene was retained after frying at 145 and 165 ◦C for 1 min, respectively.",
"title": ""
},
{
"docid": "b088438d5e44d9fc2bd4156dbb708b1a",
"text": "Applying parallelism to constraint solving seems a promising approach and it has been done with varying degrees of success. Early attempts to parallelize constraint propagation, which constitutes the core of traditional interleaved propagation and search constraint solving, were hindered by its essentially sequential nature. Recently, parallelization efforts have focussed mainly on the search part of constraint solving, as well as on local-search based solving. Lately, a particular source of parallelism has become pervasive, in the guise of GPUs, able to run thousands of parallel threads, and they have naturally drawn the attention of researchers in parallel constraint solving. In this paper, we address challenges faced when using multiple devices for constraint solving, especially GPUs, such as deciding on the appropriate level of parallelism to employ, load balancing and inter-device communication, and present our current solutions.",
"title": ""
},
{
"docid": "5a25af5b9c51b7b1a7b36f0c9b121add",
"text": "BACKGROUND\nCircumcision is a common procedure, but regional and societal attitudes differ on whether there is a need for a male to be circumcised and, if so, at what age. This is an important issue for many parents, but also pediatricians, other doctors, policy makers, public health authorities, medical bodies, and males themselves.\n\n\nDISCUSSION\nWe show here that infancy is an optimal time for clinical circumcision because an infant's low mobility facilitates the use of local anesthesia, sutures are not required, healing is quick, cosmetic outcome is usually excellent, costs are minimal, and complications are uncommon. The benefits of infant circumcision include prevention of urinary tract infections (a cause of renal scarring), reduction in risk of inflammatory foreskin conditions such as balanoposthitis, foreskin injuries, phimosis and paraphimosis. When the boy later becomes sexually active he has substantial protection against risk of HIV and other viral sexually transmitted infections such as genital herpes and oncogenic human papillomavirus, as well as penile cancer. The risk of cervical cancer in his female partner(s) is also reduced. Circumcision in adolescence or adulthood may evoke a fear of pain, penile damage or reduced sexual pleasure, even though unfounded. Time off work or school will be needed, cost is much greater, as are risks of complications, healing is slower, and stitches or tissue glue must be used.\n\n\nSUMMARY\nInfant circumcision is safe, simple, convenient and cost-effective. The available evidence strongly supports infancy as the optimal time for circumcision.",
"title": ""
},
{
"docid": "5816f70a7f4d7d0beb6e0653db962df3",
"text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.",
"title": ""
},
{
"docid": "dfca655ee52769c9c1d26e8c3f5b883f",
"text": "BACKGROUND\nDihydrocapsiate (DCT) is a natural safe food ingredient which is structurally related to capsaicin from chili pepper and is found in the non-pungent pepper strain, CH-19 Sweet. It has been shown to elicit the thermogenic effects of capsaicin but without its gastrointestinal side effects.\n\n\nMETHODS\nThe present study was designed to examine the effects of DCT on both adaptive thermogenesis as the result of caloric restriction with a high protein very low calorie diet (VLCD) and to determine whether DCT would increase post-prandial energy expenditure (PPEE) in response to a 400 kcal/60 g protein liquid test meal. Thirty-three subjects completed an outpatient very low calorie diet (800 kcal/day providing 120 g/day protein) over 4 weeks and were randomly assigned to receive either DCT capsules three times per day (3 mg or 9 mg) or placebo. At baseline and 4 weeks, fasting basal metabolic rate and PPEE were measured in a metabolic hood and fat free mass (FFM) determined using displacement plethysmography (BOD POD).\n\n\nRESULTS\nPPEE normalized to FFM was increased significantly in subjects receiving 9 mg/day DCT by comparison to placebo (p < 0.05), but decreases in resting metabolic rate were not affected. Respiratory quotient (RQ) increased by 0.04 in the placebo group (p < 0.05) at end of the 4 weeks, but did not change in groups receiving DCT.\n\n\nCONCLUSIONS\nThese data provide evidence for postprandial increases in thermogenesis and fat oxidation secondary to administration of dihydrocapsiate.\n\n\nTRIAL REGISTRATION\nclinicaltrial.govNCT01142687.",
"title": ""
},
{
"docid": "627f3c07a8ce5f0935ced97f685f44f4",
"text": "Click-through rate (CTR) prediction plays a central role in search advertising. One needs CTR estimates unbiased by positional effect in order for ad ranking, allocation, and pricing to be based upon ad relevance or quality in terms of click propensity. However, the observed click-through data has been confounded by positional bias, that is, users tend to click more on ads shown in higher positions than lower ones, regardless of the ad relevance. We describe a probabilistic factor model as a general principled approach to studying these exogenous and often overwhelming phenomena. The model is simple and linear in nature, while empirically justified by the advertising domain. Our experimental results with artificial and real-world sponsored search data show the soundness of the underlying model assumption, which in turn yields superior prediction accuracy.",
"title": ""
},
{
"docid": "be3296a4c18c8c102d9365d9ab092cf4",
"text": "Color barcode-based visible light communication (VLC) over screen-camera links has attracted great research interest in recent years due to its many desirable properties, including free of charge, free of interference, free of complex network configuration and well-controlled communication security. To achieve high-throughput barcode streaming, previous systems separately address design challenges such as image blur, imperfect frame synchronization and error correction etc., without being investigated as an interrelated whole. This does not fully exploit the capacity of color barcode streaming, and these solutions all have their own limitations from a practical perspective. This paper proposes RainBar, a new and improved color barcode-based visual communication system, which features a carefully-designed high-capacity barcode layout design to allow flexible frame synchronization and accurate code extraction. A progressive code locator detection and localization scheme and a robust color recognition scheme are proposed to enhance system robustness and hence the decoding rate under various working conditions. An extensive experimental study is presented to demonstrate the effectiveness and flexibility of RainBar. Results on Android smartphones show that our system achieves higher average throughput than previous systems, under various working environments.",
"title": ""
},
{
"docid": "45ea8497ccd9f63d519e40ef41938331",
"text": "The appearance of an object in an image encodes invaluable information about that object and the surrounding scene. Inferring object reflectance and scene illumination from an image would help us decode this information: reflectance can reveal important properties about the materials composing an object; the illumination can tell us, for instance, whether the scene is indoors or outdoors. Recovering reflectance and illumination from a single image in the real world, however, is a difficult task. Real scenes illuminate objects from every visible direction and real objects vary greatly in reflectance behavior. In addition, the image formation process introduces ambiguities, like color constancy, that make reversing the process ill-posed. To address this problem, we propose a Bayesian framework for joint reflectance and illumination inference in the real world. We develop a reflectance model and priors that precisely capture the space of real-world object reflectance and a flexible illumination model that can represent real-world illumination with priors that combat the deleterious effects of image formation. We analyze the performance of our approach on a set of synthetic data and demonstrate results on real-world scenes. These contributions enable reliable reflectance and illumination inference in the real world.",
"title": ""
},
{
"docid": "bb9f86e800e3f00bf7b34be85d846ff0",
"text": "This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
}
] |
scidocsrr
|
a04245add9a1b1f59b8f46260db49621
|
Supplementary material for “ Masked Autoregressive Flow for Density Estimation ”
|
[
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
{
"docid": "4c21ec3a600d773ea16ce6c45df8fe9d",
"text": "The efficacy of particle identification is compared using artificial neutral networks and boosted decision trees. The comparison is performed in the context of the MiniBooNE, an experiment at Fermilab searching for neutrino oscillations. Based on studies of Monte Carlo samples of simulated data, particle identification with boosting algorithms has better performance than that with artificial neural networks for the MiniBooNE experiment. Although the tests in this paper were for one experiment, it is expected that boosting algorithms will find wide application in physics. r 2005 Elsevier B.V. All rights reserved. PACS: 29.85.+c; 02.70.Uu; 07.05.Mh; 14.60.Pq",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
}
] |
[
{
"docid": "2a5710aeaba7e39c5e08c1a5310c89f6",
"text": "We present an augmented reality system that supports human workers in a rapidly changing production environment. By providing spatially registered information on the task directly in the user's field of view the system can guide the user through unfamiliar tasks (e.g. assembly of new products) and visualize information directly in the spatial context were it is relevant. In the first version we present the user with picking and assembly instructions in an assembly application. In this paper we present the initial experience with this system, which has already been used successfully by several hundred users who had no previous experience in the assembly task.",
"title": ""
},
{
"docid": "527c4c17aadb23a991d85511004a7c4f",
"text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.",
"title": ""
},
{
"docid": "08c26a40328648cf6a6d0a7efc3917a5",
"text": "Person re-identification (ReID) is an important task in video surveillance and has various applications. It is non-trivial due to complex background clutters, varying illumination conditions, and uncontrollable camera settings. Moreover, the person body misalignment caused by detectors or pose variations is sometimes too severe for feature matching across images. In this study, we propose a novel Convolutional Neural Network (CNN), called Spindle Net, based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion. It is the first time human body structure information is considered in a CNN framework to facilitate feature learning. The proposed Spindle Net brings unique advantages: 1) it separately captures semantic features from different body regions thus the macro-and micro-body features can be well aligned across images, 2) the learned region features from different semantic regions are merged with a competitive scheme and discriminative features can be well preserved. State of the art performance can be achieved on multiple datasets by large margins. We further demonstrate the robustness and effectiveness of the proposed Spindle Net on our proposed dataset SenseReID without fine-tuning.",
"title": ""
},
{
"docid": "b2f0b5ef76d9e98e93e6c5ed64642584",
"text": "The yeast and fungal prions determine heritable and infectious traits, and are thus genes composed of protein. Most prions are inactive forms of a normal protein as it forms a self-propagating filamentous β-sheet-rich polymer structure called amyloid. Remarkably, a single prion protein sequence can form two or more faithfully inherited prion variants, in effect alleles of these genes. What protein structure explains this protein-based inheritance? Using solid-state nuclear magnetic resonance, we showed that the infectious amyloids of the prion domains of Ure2p, Sup35p and Rnq1p have an in-register parallel architecture. This structure explains how the amyloid filament ends can template the structure of a new protein as it joins the filament. The yeast prions [PSI(+)] and [URE3] are not found in wild strains, indicating that they are a disadvantage to the cell. Moreover, the prion domains of Ure2p and Sup35p have functions unrelated to prion formation, indicating that these domains are not present for the purpose of forming prions. Indeed, prion-forming ability is not conserved, even within Saccharomyces cerevisiae, suggesting that the rare formation of prions is a disease. The prion domain sequences generally vary more rapidly in evolution than does the remainder of the molecule, producing a barrier to prion transmission, perhaps selected in evolution by this protection.",
"title": ""
},
{
"docid": "7b88e651bf87e3a780fd1cf31b997bc5",
"text": "While the use of the internet and social media as a tool for extremists and terrorists has been well documented, understanding the mechanisms at work has been much more elusive. This paper begins with a grounded theory approach guided by a new theoretical approach to power that utilizes both terrorism cases and extremist social media groups to develop an explanatory model of radicalization. Preliminary hypotheses are developed, explored and refined in order to develop a comprehensive model which is then presented. This model utilizes and applies concepts from social theorist Michel Foucault, including the use of discourse and networked power relations in order to normalize and modify thoughts and behaviors. The internet is conceptualized as a type of institution in which this framework of power operates and seeks to recruit and radicalize. Overall, findings suggest that the explanatory model presented is a well suited, yet still incomplete in explaining the process of online radicalization.",
"title": ""
},
{
"docid": "d1ebf47c1f0b1d8572d526e9260dbd32",
"text": "In this paper, mortality in the immediate aftermath of an earthquake is studied on a worldwide scale using multivariate analysis. A statistical method is presented that analyzes reported earthquake fatalities as a function of a heterogeneous set of parameters selected on the basis of their presumed influence on earthquake mortality. The ensemble was compiled from demographic, seismic, and reported fatality data culled from available records of past earthquakes organized in a geographic information system. The authors consider the statistical relation between earthquake mortality and the available data ensemble, analyze the validity of the results in view of the parametric uncertainties, and propose a multivariate mortality analysis prediction method. The analysis reveals that, although the highest mortality rates are expected in poorly developed rural areas, high fatality counts can result from a wide range of mortality ratios that depend on the effective population size.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "2f4cfa040664d08b1540677c8d72f962",
"text": "We study the problem of modeling spatiotemporal trajectories over long time horizons using expert demonstrations. For instance, in sports, agents often choose action sequences with long-term goals in mind, such as achieving a certain strategic position. Conventional policy learning approaches, such as those based on Markov decision processes, generally fail at learning cohesive long-term behavior in such high-dimensional state spaces, and are only effective when fairly myopic decisionmaking yields the desired behavior. The key difficulty is that conventional models are “single-scale” and only learn a single state-action policy. We instead propose a hierarchical policy class that automatically reasons about both long-term and shortterm goals, which we instantiate as a hierarchical neural network. We showcase our approach in a case study on learning to imitate demonstrated basketball trajectories, and show that it generates significantly more realistic trajectories compared to non-hierarchical baselines as judged by professional sports analysts.",
"title": ""
},
{
"docid": "83413682f018ae5aec9ec415679de940",
"text": "An 18-year-old female patient arrived at the emergency department complaining of abdominal pain and fullness after a heavy meal. Physical examination revealed she was filthy and cover in feces, and she experienced severe abdominal distension. She died in ED and a diagnostic autopsy examination was requested. At external examination, the pathologist observed a significant dilation of the anal sphincter and suspected sexual assault, thus alerting the Judicial Authority who assigned the case to our department for a forensic autopsy. During the autopsy, we observed anal orifice expansion without signs of violence; food was found in the pleural cavity. The stomach was hyper-distended and perforated at three different points as well as the diaphragm. The patient was suffering from anorexia nervosa with episodes of overeating followed by manual voiding of her feces from the anal cavity (thus explaining the anal dilatation). The forensic pathologists closed the case as an accidental death.",
"title": ""
},
{
"docid": "b692e35c404da653d27dc33c01867b6e",
"text": "We demonstrate that it is possible to perform automatic sentiment classification in the very noisy domain of customer feedback data. We show that by using large feature vectors in combination with feature reduction, we can train linear support vector machines that achieve high classification accuracy on data that present classification challenges even for a human annotator. We also show that, surprisingly, the addition of deep linguistic analysis features to a set of surface level word n-gram features contributes consistently to classification accuracy in this domain.",
"title": ""
},
{
"docid": "1701da2aed094fdcbfaca6c2252d2e53",
"text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras.",
"title": ""
},
{
"docid": "1c79bf1b4dcad01f9afc54f467d8067f",
"text": "With the rapid growth of network bandwidth, increases in CPU cores on a single machine, and application API models demanding more short-lived connections, a scalable TCP stack is performance-critical. Although many clean-state designs have been proposed, production environments still call for a bottom-up parallel TCP stack design that is backward-compatible with existing applications.\n We present Fastsocket, a BSD Socket-compatible and scalable kernel socket design, which achieves table-level connection partition in TCP stack and guarantees connection locality for both passive and active connections. Fastsocket architecture is a ground up partition design, from NIC interrupts all the way up to applications, which naturally eliminates various lock contentions in the entire stack. Moreover, Fastsocket maintains the full functionality of the kernel TCP stack and BSD-socket-compatible API, and thus applications need no modifications.\n Our evaluations show that Fastsocket achieves a speedup of 20.4x on a 24-core machine under a workload of short-lived connections, outperforming the state-of-the-art Linux kernel TCP implementations. When scaling up to 24 CPU cores, Fastsocket increases the throughput of Nginx and HAProxy by 267% and 621% respectively compared with the base Linux kernel. We also demonstrate that Fastsocket can achieve scalability and preserve BSD socket API at the same time. Fastsocket is already deployed in the production environment of Sina WeiBo, serving 50 million daily active users and billions of requests per day.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "caf88f7fd5ec7f3a3499f46f541b985b",
"text": "Photo-based question answering is a useful way of finding information about physical objects. Current question answering (QA) systems are text-based and can be difficult to use when a question involves an object with distinct visual features. A photo-based QA system allows direct use of a photo to refer to the object. We develop a three-layer system architecture for photo-based QA that brings together recent technical achievements in question answering and image matching. The first, template-based QA layer matches a query photo to online images and extracts structured data from multimedia databases to answer questions about the photo. To simplify image matching, it exploits the question text to filter images based on categories and keywords. The second, information retrieval QA layer searches an internal repository of resolved photo-based questions to retrieve relevant answers. The third, human-computation QA layer leverages community experts to handle the most difficult cases. A series of experiments performed on a pilot dataset of 30,000 images of books, movie DVD covers, grocery items, and landmarks demonstrate the technical feasibility of this architecture. We present three prototypes to show how photo-based QA can be built into an online album, a text-based QA, and a mobile application.",
"title": ""
},
{
"docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04",
"text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.",
"title": ""
},
{
"docid": "e8055c37b0082cff57e02389949fb7ca",
"text": "Distributed SDN controllers have been proposed to address performance and resilience issues. While approaches for datacenters are built on strongly-consistent state sharing among controllers, others for WAN and constrained networks rely on a loosely-consistent distributed state. In this paper, we address the problem of failover for distributed SDN controllers by proposing two strategies for neighbor active controllers to take over the control of orphan OpenFlow switches: (1) a greedy incorporation and (2) a pre-partitioning among controllers. We built a prototype with distributed Floodlight controllers to evaluate these strategies. The results show that the failover duration with the greedy approach is proportional to the quantity of orphan switches while the pre-partitioning approach, introducing a very small additional control traffic, enables to react quicker in less than 200ms.",
"title": ""
},
{
"docid": "7eed5e11e47807a3ff0af21461e88385",
"text": "We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). Each kernel learns a location of specialization along with its weights through standard backpropagation. A differentiable attention mechanism requiring no additional supervision is used to optimize the ROIs. Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. By restricting kernel ROIs, we reduce the number of sliding convolutional operations performed throughout the network in its forward pass, speeding up both training and inference. We evaluate our proposed architecture on both synthetic and natural tasks across multiple domains. TKNs obtain significant improvements over baselines, requiring less computation (around an order of magnitude) while achieving superior performance.",
"title": ""
},
{
"docid": "5e4ef99cd48e385984509613b3697e37",
"text": "RC4 has been the most popular stream cipher in the history of symmetric key cryptography. Its internal state contains a permutation over all possible bytes from 0 to 255, and it attempts to generate a pseudo-random sequence of bytes (called keystream) by extracting elements of this permutation. Over the last twenty years, numerous cryptanalytic results on RC4 stream cipher have been published, many of which are based on non-random (biased) events involving the secret key, the state variables, and the keystream of the cipher. Though biases based on the secret key are common in RC4 literature, none of the existing ones depends on the length of the secret key. In the first part of this paper, we investigate the effect of RC4 keylength on its keystream, and report significant biases involving the length of the secret key. In the process, we prove the two known empirical biases that were experimentally reported and used in recent attacks against WEP and WPA by Sepehrdad, Vaudenay and Vuagnoux in EUROCRYPT 2011. After our current work, there remains no bias in the literature of WEP and WPA attacks without a proof. In the second part of the paper, we present theoretical proofs of some significant initial-round empirical biases observed by Sepehrdad, Vaudenay and Vuagnoux in SAC 2010. In the third part, we present the derivation of the complete probability distribution of the first byte of RC4 keystream, a problem left open for a decade since the observation by Mironov in CRYPTO 2002. Further, the existence of positive biases towards zero for all the initial bytes 3 to 255 is proved and exploited towards a generalized broadcast attack on RC4. We also investigate for long-term non-randomness in the keystream, and prove a new long-term bias of RC4.",
"title": ""
},
{
"docid": "95a376ec68ac3c4bd6b0fd236dca5bcd",
"text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.",
"title": ""
},
{
"docid": "c18903fad6b70086de9be9bafffb2b65",
"text": "In this work we determine how well the common objective image quality measures (Mean Squared Error (MSE), local MSE, Signalto-Noise Ratio (SNR), Structural Similarity Index (SSIM), Visual Signalto-Noise Ratio (VSNR) and Visual Information Fidelity (VIF)) predict subjective radiologists’ assessments for brain and body computed tomography (CT) images. A subjective experiment was designed where radiologists were asked to rate the quality of compressed medical images in a setting similar to clinical. We propose a modified Receiver Operating Characteristic (ROC) analysis method for comparison of the image quality measures where the “ground truth” is considered to be given by subjective scores. The best performance was achieved by the SSIM index and VIF for brain and body CT images. The worst results were observed for VSNR. We have utilized a logistic curve model which can be used to predict the subjective assessments with an objective criteria. This is a practical tool that can be used to determine the quality of medical images.",
"title": ""
}
] |
scidocsrr
|
a92bf41d623dafed5ba22750e516b746
|
Current and Future Trends in Mobile Device Forensics: A Survey
|
[
{
"docid": "55f80d7b459342a41bb36a5c0f6f7e0d",
"text": "A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.",
"title": ""
}
] |
[
{
"docid": "a406ed86fdc5b68c66561fb672037438",
"text": "We investigate techniques based on deep neural networks (DNNs) for attacking the single-channel multi-talker speech recognition problem. Our proposed approach contains five key ingredients: a multi-style training strategy on artificially mixed speech data, a separate DNN to estimate senone posterior probabilities of the louder and softer speakers at each frame, a weighted finite-state transducer (WFST)-based two-talker decoder to jointly estimate and correlate the speaker and speech, a speaker switching penalty estimated from the energy pattern change in the mixed-speech, and a confidence based system combination strategy. Experiments on the 2006 speech separation and recognition challenge task demonstrate that our proposed DNN-based system has remarkable noise robustness to the interference of a competing speaker. The best setup of our proposed systems achieves an average word error rate (WER) of 18.8% across different SNRs and outperforms the state-of-the-art IBM superhuman system by 2.8% absolute with fewer assumptions.",
"title": ""
},
{
"docid": "82d4b2aa3e3d3ec10425c6250268861c",
"text": "Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios.",
"title": ""
},
{
"docid": "d5237790e32a2155a842d4b8afbf413e",
"text": "1077-3142/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.cviu.2013.11.008 q This paper has been recommended for acceptance by Nicu Sebe. ⇑ Corresponding author. Address: 601 University Drive, Department of Computer Science, Texas State University, San Marcos, TX 78666, United States. Fax: +1 512 245 8750. E-mail address: [email protected] (Y. Lu). Bo Li , Yijuan Lu a,⇑, Afzal Godil , Tobias Schreck , Benjamin Bustos , Alfredo Ferreira , Takahiko Furuya , Manuel J. Fonseca , Henry Johan , Takahiro Matsuda , Ryutarou Ohbuchi , Pedro B. Pascoal , Jose M. Saavedra d,h",
"title": ""
},
{
"docid": "ee20233660c2caa4a24dbfb512172277",
"text": "Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections.",
"title": ""
},
{
"docid": "1458c69afa813c6a8df73d5a3bad3fc1",
"text": "This paper presents closed-form solutions for the inverse and direct dynamic models of the Gough-Stewart parallel robot. The models are obtained in terms of the Cartesian dynamic model elements of the legs and of the Newton-Euler equation of the platform. The final form has an interesting and intuitive physical interpretation. The base inertial parameters of the robot, which constitute the minimum number of inertial parameters, are explicitly determined. The number of operations to compute the inverse and direct dynamic models are given.",
"title": ""
},
{
"docid": "48096a9a7948a3842afc082fa6e223a6",
"text": "We present a method for using previously-trained ‘teacher’ agents to kickstart the training of a new ‘student’ agent. To this end, we leverage ideas from policy distillation (Rusu et al., 2015; Parisotto et al., 2015) and population based training (Jaderberg et al., 2017). Our method places no constraints on the architecture of the teacher or student agents, and it regulates itself to allow the students to surpass their teachers in performance. We show that, on a challenging and computationally-intensive multi-task benchmark (Beattie et al., 2016), kickstarted training improves the data efficiency of new agents, making it significantly easier to iterate on their design. We also show that the same kickstarting pipeline can allow a single student agent to leverage multiple ‘expert’ teachers which specialise on individual tasks. In this setting kickstarting yields surprisingly large gains, with the kickstarted agent matching the performance of an agent trained from scratch in almost 10× fewer steps, and surpassing its final performance by 42%. Kickstarting is conceptually simple and can easily be incorporated into reinforcement learning experiments.",
"title": ""
},
{
"docid": "104e9a0e95ec6eeaefc441bf69cf3e9b",
"text": "Celebrity worship is a form of parasocial interaction in which individuals become obsessed with 1 or more celebrities, similar to an erotomanic type of delusional disorder. Drawing on the cognitive factors implicated in erotomania, the authors hypothesized that celebrity worshippers might be expected to exhibit verbal, visuospatial, and cognitive deficits related to flexibility and associative learning. This general hypothesis was tested in a sample of 102 participants who completed the Celebrity Attitude Scale (L. E. McCutcheon, R. Lange, & J. Houran, 2002), the Entertainment-Social, Intense-Personal, and Borderline Pathological subscales, and 6 cognitive measures that included creativity (verbal), crystallized intelligence, critical thinking, spatial ability, and need for cognition. The results were consistent with predictions and suggest that cognitive deficits only help facilitate an individual's susceptibility to engage in celebrity worship. The results are discussed in terms of the multivariate absorption-addiction model of celebrity worship.",
"title": ""
},
{
"docid": "d924282668c0c5dfc0908205402dfabf",
"text": "Performance appraisal (PA) is a crucial HR process that enables an organization to periodically measure and evaluate every employee’s performance and also to drive performance improvements. In this paper, we describe a novel system called HiSPEED to analyze PA data using automated statistical, data mining and text mining techniques, to generate novel and actionable insights/patterns and to help in improving the quality and effectiveness of the PA process. The goal is to produce insights that can be used to answer (in part) the crucial “business questions” that HR executives and business leadership face in talent management. The business questions pertain to (1) improving the quality of the goal setting process, (2) improving the quality of the self-appraisal comments and supervisor feedback comments, (3) discovering high-quality supervisor suggestions for performance improvements, (4) discovering evidence provided by employees to support their self-assessments, (5) measuring the quality of supervisor assessments, (6) understanding the root causes of poor and exceptional performances, (7) detecting instances of personal and systemic biases and so forth. The paper discusses specially designed algorithms to answer these business questions and illustrates them by reporting the insights produced on a real-life PA dataset from a large multinational IT services organization.",
"title": ""
},
{
"docid": "01a636d56a324f8bb8367b8fc73c8687",
"text": "Formal risk analysis and management in software engineering is still an emerging part of project management. We provide a brief introduction to the concepts of risk management for software development projects, and then an overview of a new risk management framework. Risk management for software projects is intended to minimize the chances of unexpected events, or more specifically to keep all possible outcomes under tight management control. Risk management is also concerned with making judgments about how risk events are to be treated, valued, compared and combined. The ProRisk management framework is intended to account for a number of the key risk management principles required for managing the process of software development. It also provides a support environment to operationalize these management tasks.",
"title": ""
},
{
"docid": "03e48fbf57782a713bd218377290044c",
"text": "Several researchers have shown that the efficiency of value iteration, a dynamic programming algorithm for Markov decision processes, can be improved by prioritizing the order of Bellman backups to focus computation on states where the value function can be improved the most. In previous work, a priority queue has been used to order backups. Although this incurs overhead for maintaining the priority queue, previous work has argued that the overhead is usually much less than the benefit from prioritization. However this conclusion is usually based on a comparison to a non-prioritized approach that performs Bellman backups on states in an arbitrary order. In this paper, we show that the overhead for maintaining the priority queue can be greater than the benefit, when it is compared to very simple heuristics for prioritizing backups that do not require a priority queue. Although the order of backups induced by our simple approach is often sub-optimal, we show that its smaller overhead allows it to converge faster than other state-of-the-art priority-based solvers.",
"title": ""
},
{
"docid": "b670bfebc1effc53ebb8b34b57ce81ed",
"text": "The palm-print verification system is based on two modes, viz, Enrollment mode and Recognition mode. In the enrollment mode, the palm-print features are acquired from the sensor and stored in a database along with the person's identity for the recognition of his/her identity. In the recognition mode, the palm-print features are re-acquired from the sensor and compared against the stored data to determine the user identity. In the pre-processing stage, two segmentation processes are proposed to extract the region of interest (ROI) of palm. The first skin-color segmentation is used to extract the hand image from the background. The second region of interest of the palm is segmented by using the valley detection algorithm. The Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) are applied for the purpose of extracting the features. Further, the Sobel Operator and Local Binary Pattern (LBP) are used for increasing the number of features. The mean and standard deviation of DWT, DCT and LBP are computed. Twenty hand-scanned images and ten samples of CASIA palmprint database were used in this experiment. The performance of the proposed system is found to be satisfactory.",
"title": ""
},
{
"docid": "ec5d110ea0267fc3e72e4fa2cb4f186e",
"text": "We present a secure Internet of Things (IoT) architecture for Smart Cities. The large-scale deployment of IoT technologies within a city promises to make city operations efficient while improving quality of life for city inhabitants. Mission-critical Smart City data, captured from and carried over IoT networks, must be secured to prevent cyber attacks that might cripple city functions, steal personal data and inflict catastrophic harm. We present an architecture containing four basic IoT architectural blocks for secure Smart Cities: Black Network, Trusted SDN Controller, Unified Registry and Key Management System. Together, these basic IoT-centric blocks enable a secure Smart City that mitigates cyber attacks beginning at the IoT nodes themselves.",
"title": ""
},
{
"docid": "75a416e67e26cad0ce0809127c67209b",
"text": "We describe a general framework for modelling probabilistic databases using factorization approaches. The framework includes tensor-based approaches which have been very successful in modelling triple-oriented databases and also includes recently developed neural network models. We consider the case that the target variable models the existence of a tuple, a continuous quantity associated with a tuple, multiclass variables or count variables. We discuss appropriate cost functions with different parameterizations and optimization approaches. We argue that, in general, some combination of models leads to best predictive results. We present experimental results on the modelling of existential variables and count variables.",
"title": ""
},
{
"docid": "17e280502d20361d920fa0e00aa6f98a",
"text": "In recent years, having the advantages of being small, low in cost and high in efficiency, Half bridge (HB) LLC resonant converter for power density and high efficiency is increasingly required in the battery charge application. The HB LLC resonant converters have been used for reducing the current and voltage stress and switching losses of the components. However, it is not suited for wide range of the voltage and output voltage due to the uneven voltage and current component's stresses. The HB LLC resonant for battery charge of on board is presented in this paper. The theoretical results are verified through an experimental prototype for battery charger on board.",
"title": ""
},
{
"docid": "afcb6c9130e16002100ff68f68d98ff3",
"text": "This study characterizes adults who report being physically abused during childhood, and examines associations of reported type and frequency of abuse with adult mental health. Data were derived from the 2000-2001 and 2004-2005 National Epidemiologic Survey on Alcohol and Related Conditions, a large cross-sectional survey of a representative sample (N = 43,093) of the U.S. population. Weighted means, frequencies, and odds ratios of sociodemographic correlates and prevalence of psychiatric disorders were computed. Logistic regression models were used to examine the strength of associations between child physical abuse and adult psychiatric disorders adjusted for sociodemographic characteristics, other childhood adversities, and comorbid psychiatric disorders. Child physical abuse was reported by 8% of the sample and was frequently accompanied by other childhood adversities. Child physical abuse was associated with significantly increased adjusted odds ratios (AORs) of a broad range of DSM-IV psychiatric disorders (AOR = 1.16-2.28), especially attention-deficit hyperactivity disorder, posttraumatic stress disorder, and bipolar disorder. A dose-response relationship was observed between frequency of abuse and several adult psychiatric disorder groups; higher frequencies of assault were significantly associated with increasing adjusted odds. The long-lasting deleterious effects of child physical abuse underscore the urgency of developing public health policies aimed at early recognition and prevention.",
"title": ""
},
{
"docid": "e31af9137176dd39efe0a9e286dd981b",
"text": "This paper presents a novel automated procedure for discovering expressive shape specifications for sophisticated functional data structures. Our approach extracts potential shape predicates based on the definition of constructors of arbitrary user-defined inductive data types, and combines these predicates within an expressive first-order specification language using a lightweight data-driven learning procedure. Notably, this technique requires no programmer annotations, and is equipped with a type-based decision procedure to verify the correctness of discovered specifications. Experimental results indicate that our implementation is both efficient and effective, capable of automatically synthesizing sophisticated shape specifications over a range of complex data types, going well beyond the scope of existing solutions.",
"title": ""
},
{
"docid": "ba695228c0fbaf91d6db972022095e98",
"text": "This study evaluated the critical period hypothesis for second language (L2) acquisition. The participants were 240 native speakers of Korean who differed according to age of arrival (AOA) in the United States (1 to 23 years), but were all experienced in English (mean length of residence 5 15 years). The native Korean participants’ pronunciation of English was evaluated by having listeners rate their sentences for overall degree of foreign accent; knowledge of English morphosyntax was evaluated using a 144-item grammaticality judgment test. As AOA increased, the foreign accents grew stronger, and the grammaticality judgment test scores decreased steadily. However, unlike the case for the foreign accent ratings, the effect of AOA on the grammaticality judgment test scores became nonsignificant when variables confounded with AOA were controlled. This suggested that the observed decrease in morphosyntax scores was not the result of passing a maturationally defined critical period. Additional analyses showed that the score for sentences testing knowledge of rule based, generalizable aspects of English morphosyntax varied as a function of how much education the Korean participants had received in the United States. The scores for sentences testing lexically based aspects of English morphosyntax, on the other hand, depended on how much the Koreans used English. © 1999 Academic Press",
"title": ""
},
{
"docid": "04c60b1bc04886086382402e9c14717d",
"text": "This paper proposes a novel robust and adaptive sliding-mode (SM) control for a cascaded two-level inverter (CTLI)-based grid-connected photovoltaic (PV) system. The modeling and design of the control scheme for the CTLI-based grid-connected PV system is developed to supply active power and reactive power with variable solar irradiance. A vector controller is developed, keeping the maximum power delivery of the PV in consideration. Two different switching schemes have been considered to design SM controllers and studied under similar operating situations. Instead of the referred space vector pulsewidth modulation (PWM) technique, a simple PWM modulation technique is used for the operation of the proposed SM controller. The performance of the SM controller is improved by using an adaptive hysteresis band calculation. The controller performance is found to be satisfactory for both the schemes at considered load and solar irradiance level variations in simulation environment. The laboratory prototype, operated with the proposed controller, is found to be capable of implementing the control algorithm successfully in the considered situation.",
"title": ""
},
{
"docid": "66133239610bb08d83fb37f2c11a8dc5",
"text": "sists of two excitation laser beams. One beam scans the volume of the brain from the side of a horizontally positioned zebrafish but is rapidly switched off when inside an elliptical exclusion region located over the eye (Fig. 1b). Simultaneously, a second beam scans from the front, to cover the forebrain and the regions between the eyes. Together, these two beams achieve nearly complete coverage of the brain without exposing the retina to direct laser excitation, which allows unimpeded presentation of visual stimuli that are projected onto a screen below the fish. To monitor intended swimming behavior, we used existing methods for recording activity from motor neuron axons in the tail of paralyzed larval zebrafish1 (Fig. 1a and Supplementary Note). This system provides imaging speeds of up to three brain volumes per second (40 planes per brain volume); increases in camera speed will allow for faster volumetric sampling. Because light-sheet imaging may still introduce some additional sensory stimulation (excitation light scattering in the brain and reflected from the glass walls of the chamber), we assessed whether fictive behavior in 5–7 d post-fertilization (d.p.f.) fish was robust to the presence of the light sheets. We tested two visuoLight-sheet functional imaging in fictively behaving zebrafish",
"title": ""
},
{
"docid": "b7914e542be8aeb5755106525916e86d",
"text": "Waymo's self-driving cars contain a broad set of technologies that enable our cars to sense the vehicle surroundings, perceive and understand what is happening in the vehicle vicinity, and determine the safe and efficient actions that the vehicle should take. Many of these technologies are rooted in advanced semiconductor technologies, e.g. faster transistors that enable more compute or low noise designs that enable the faintest sensor signals to be perceived. This paper summarizes a few areas where semiconductor technologies have proven to be fundamentally enabling to self-driving capabilities. The paper also lays out some of the challenges facing advanced semiconductors in the automotive context, as well as some of the opportunities for future innovation.",
"title": ""
}
] |
scidocsrr
|
a3806c15a59a38d67c2afb089204d87c
|
Modeling affordances using Bayesian networks
|
[
{
"docid": "9448a075257110d47c0fefa521aa34c1",
"text": "We present a developmental perspective of robot learning that uses affordances as the link between sensory-motor coordination and imitation. The key concept is a general model for affordances able to learn the statistical relations between actions, object properties and the effects of actions on objects. Based on the learned affordances, it is possible to perform simple imitation games providing both task interpretation and planning capabilities. To evaluate the approach, we provide results of affordance learning with a real robot and simple imitation games with people.",
"title": ""
}
] |
[
{
"docid": "a9768bced10c55345f116d7d07d2bc5a",
"text": "In this paper, we propose a variety of distance measures for hesitant fuzzy sets, based on which the corresponding similarity measures can be obtained. We investigate the connections of the aforementioned distance measures and further develop a number of hesitant ordered weighted distance measures and hesitant ordered weighted similarity measures. They can alleviate the influence of unduly large (or small) deviations on the aggregation results by assigning them low (or high) weights. Several numerical examples are provided to illustrate these distance and similarity measures. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "ab50f458d919ba3ac3548205418eea62",
"text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295",
"title": ""
},
{
"docid": "2c0b3b58da77cc217e4311142c0aa196",
"text": "In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.",
"title": ""
},
{
"docid": "f66ce6fb4675091de36b3e64e4fb52a5",
"text": "Phishing has been a major problem for information systems managers and users for several years now. In 2008, it was estimated that phishing resulted in close to $50 billion in damages to U.S. consumers and businesses. Even so, research has yet to explore many of the reasons why Internet users continue to be exploited. The goal of this paper is to better understand the behavioral factors that may increase one’s susceptibility for complying with a phisher’s request for personal information. Using past research on deception detection, a research model was developed to help explain compliant phishing responses. The model was tested using a field study in which each participant received a phishing e-mail asking for sensitive information. It was found that four behavioral factors were influential as to whether the phishing e-mails were answered with sensitive information. The paper concludes by suggesting that the behavioral aspect of susceptible users be integrated into the current tools and materials used in antiphishing efforts. Key WoRds and phRases: computer-mediated deception, electronic mail fraud, Internet security, interpersonal deception theory, phishing. The inTeRneT has opened up a WealTh of oppoRTuniTies for individuals and businesses to expand the reach and range of their personal and commercial transactions, but these 274 WRIghT aND MaRETT openings have also created a venue for a number of computer security issues that must be addressed. Investments in security hardware and software are now fundamental parts of a company’s information technology (IT) budget. also, security policies are continually developed and refined to reduce technical vulnerabilities. however, the frequent use of Internet technologies by corporations can also introduce new vulnerabilities. One recent phenomenon that exploits end users’ carelessness is phishing. Phishing uses obfuscation of both e-mails and Web sites to trick Web users into complying with a request for personal information [5, 27]. The deceitful people behind the scam, the “phishers,” are then able to use the personal information for a number of illicit activities, ranging from individual identity theft to the theft of a company’s intellectual property. according to some estimates, phishing results in close to $50 billion of damage to U.S. consumers and businesses a year [49, 71]. In 2007, phishing attacks increased and some 3 million adults lost over $3 billion in the 12 months ending in august 2007 [29]. although some reports indicate that the annual financial damage is not rising dramatically from year to year, the number of reported victims is increasing at a significant rate [35]. Phishing continues to be a very real problem for Web users in all walks of life. Consistent with the “fishing” homonym, phishing attacks are often described by using a “bait-and-hook” metaphor [70]. The “bait” consists of a mass e-mail submission sent to a large number of random and unsuspecting recipients. The message strongly mimics the look and feel of a legitimate business, including the use of familiar logos and slogans. The e-mail often requests the recipient’s aid in correcting a technical problem with his or her user account, ostensibly by confirming or “resupplying” a user ID, a password, a credit card number, or other personal information. 
The message typically encourages recipients to visit a bogus Web site (the “hook”) that is similar in appearance to an actual corporate Web site, except that user-supplied information is not sent to the legitimate company’s Web server, but to a server of the phisher’s choosing. The phishing effort is relatively low in terms of cost and risk for the phishers. Further, phishers may reside in international locations that place them out of reach of authorities in the victim’s jurisdiction, making prosecution much more difficult [33]. Phishers are rarely apprehended and prosecuted for the fraud they commit. Developing methods for detecting phishing before any damage is inflicted is a priority, and several approaches for detection have resulted from the effort. Technical countermeasures, such as e-mail filtering and antiphishing toolbars, successfully detect phishing attempts in about 35 percent of cases [84]. Updating fraud definitions, flagging bogus Web sites, and preventing false alarms from occurring continues to challenge individual users and IT departments alike. an automated comparison of the design, layout, and style characteristics between authentic and fraudulent Web sites has been shown to be more promising than a simple visual inspection made by a visitor, but an up-to-date registry of valid and invalid Web sites must be available for such a method to be practical [55]. Because of ineffective technological methods of prevention, much of the responsibility for detecting phishing lies with the end user, and an effective strategy for guarding against phishing should include both technological and human detectors. however, prior research has shown that, like technology, people ThE INFlUENCES OF ExPERIENTIal aND DISPOSITIONal FaCTORS IN PhIShINg 275 are also limited in terms of detecting the fraud once they are coerced into visiting a bogus Web site [19]. Once the message recipient chooses to visit a fraudulent Web site, he or she is unlikely to detect the fraudulent nature of the request and the “hook” will have been set. In order to prevent users from sending sensitive information to phishers, educating and training e-mail users about fraud prevention and detection at the “bait” stage must be considered the first line of defense [53]. The goal of this paper is to better understand, given the large number of phishing attempts and the vast amount of attention given to phishing in the popular press, why users of online applications such as e-mail and instant messaging still fall prey to these fraudulent efforts.",
"title": ""
},
{
"docid": "33ce6e07bc4031f1b915e32769d5c984",
"text": "MOTIVATION\nDIYABC is a software package for a comprehensive analysis of population history using approximate Bayesian computation on DNA polymorphism data. Version 2.0 implements a number of new features and analytical methods. It allows (i) the analysis of single nucleotide polymorphism data at large number of loci, apart from microsatellite and DNA sequence data, (ii) efficient Bayesian model choice using linear discriminant analysis on summary statistics and (iii) the serial launching of multiple post-processing analyses. DIYABC v2.0 also includes a user-friendly graphical interface with various new options. It can be run on three operating systems: GNU/Linux, Microsoft Windows and Apple Os X.\n\n\nAVAILABILITY\nFreely available with a detailed notice document and example projects to academic users at http://www1.montpellier.inra.fr/CBGP/diyabc CONTACT: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "5e6209b4017039a809f605d0847a57af",
"text": "Bag-of-ngrams (BoN) models are commonly used for representing text. One of the main drawbacks of traditional BoN is the ignorance of n-gram’s semantics. In this paper, we introduce the concept of Neural Bag-of-ngrams (Neural-BoN), which replaces sparse one-hot n-gram representation in traditional BoN with dense and rich-semantic n-gram representations. We first propose context guided n-gram representation by adding n-grams to word embeddings model. However, the context guided learning strategy of word embeddings is likely to miss some semantics for text-level tasks. Text guided ngram representation and label guided n-gram representation are proposed to capture more semantics like topic or sentiment tendencies. Neural-BoN with the latter two n-gram representations achieve state-of-the-art results on 4 documentlevel classification datasets and 6 semantic relatedness categories. They are also on par with some sophisticated DNNs on 3 sentence-level classification datasets. Similar to traditional BoN, Neural-BoN is efficient, robust and easy to implement. We expect it to be a strong baseline and be used in more real-world applications.",
"title": ""
},
{
"docid": "df3b0590054fb3056ed82247d01bc951",
"text": "Understanding nonverbal behaviors in human machine interaction is a complex and challenge task. One of the key aspects is to recognize human emotion states accurately. This paper presents our effort to the Audio/Visual Emotion Challenge (AVEC'14), whose goal is to predict the continuous values of the emotion dimensions arousal, valence and dominance at each moment in time. The proposed method utilizes deep belief network based models to recognize emotion states from audio and visual modalities. Firstly, we employ temporal pooling functions in the deep neutral network to encode dynamic information in the features, which achieves the first time scale temporal modeling. Secondly, we combine the predicted results from different modalities and emotion temporal context information simultaneously. The proposed multimodal-temporal fusion achieves temporal modeling for the emotion states in the second time scale. Experiments results show the efficiency of each key point of the proposed method and competitive results are obtained",
"title": ""
},
{
"docid": "57332cd7472707617e864d196ff454ef",
"text": "Vehicle platoons are fully automated vehicles driving in close proximity of each other, where both distance keeping and steering is under automatic control. This paper is aiming at a variant of vehicle platoons, where the lateral control is using forward looking sensors, i.e. camera, radar. Such a system solution implies that the vehicle dynamics are coupled together laterally, in contrast to the classical look-down solutions. For such a platoon, lateral string stability is an important property that the controller needs to guarantee. This article proposes a method for designing such a distributed controller. It also examines the effect of model uncertainties on the lateral string stability of the platoon for the proposed method.",
"title": ""
},
{
"docid": "6e07b4d373b340512e06520c3aecd80c",
"text": "Text mining has found a variety of applications in diverse domains. Of late, prolific work is reported in using text mining techniques to solve problems in financial domain. The objective of this paper is to provide a state-of-the-art survey of various applications of Text mining to finance. These applications are categorized broadly into FOREX rate prediction, stock market prediction, customer relationship management (CRM) and cyber security. Since finance is a service industry, these problems are paramount in operational and customer growth aspects. We reviewed 89 research papers that appeared during the period 2000-2016, highlighted some of the issues, gaps, key challenges in this area and proposed some future research directions. Finally, this review can be extremely useful to budding researchers in this area, as many open problems are highlighted.",
"title": ""
},
{
"docid": "a252ec33139d9489133b91c2551a694f",
"text": "The lucrative rewards of security penetrations into large organizations have motivated the development and use of many sophisticated rootkit techniques to maintain an attacker's presence on a compromised system. Due to the evasive nature of such infections, detecting these rootkit infestations is a problem facing modern organizations. While many approaches to this problem have been proposed, various drawbacks that range from signature generation issues, to coverage, to performance, prevent these approaches from being ideal solutions.\n In this paper, we present Blacksheep, a distributed system for detecting a rootkit infestation among groups of similar machines. This approach was motivated by the homogenous natures of many corporate networks. Taking advantage of the similarity amongst the machines that it analyses, Blacksheep is able to efficiently and effectively detect both existing and new infestations by comparing the memory dumps collected from each host.\n We evaluate Blacksheep on two sets of memory dumps. One set is taken from virtual machines using virtual machine introspection, mimicking the deployment of Blacksheep on a cloud computing provider's network. The other set is taken from Windows XP machines via a memory acquisition driver, demonstrating Blacksheep's usage under more challenging image acquisition conditions. The results of the evaluation show that by leveraging the homogeneous nature of groups of computers, it is possible to detect rootkit infestations.",
"title": ""
},
{
"docid": "50d42d832a0cd04becdaa26cc33a9782",
"text": "The performance of Fingerprint recognition system depends on minutiae which are extracted from raw fingerprint image. Often the raw fingerprint image captured from a scanner may not be of good quality, which leads to inaccurate extraction of minutiae. Hence it is essential to preprocess the fingerprint image before extracting the reliable minutiae for matching of two fingerprint images. Image enhancement technique followed by minutiae extraction completes the fingerprint recognition process. Fingerprint recognition process with a matcher constitutes Fingerprint recognition system ASIC implementation of image enhancement technique for fingerprint recognition process using Cadence tool is proposed. Further, the result obtained from hardware design is compared with that of software using MatLab tool.",
"title": ""
},
{
"docid": "1e56ff2af1b76571823d54d1f7523b49",
"text": "Open-source intelligence offers value in information security decision making through knowledge of threats and malicious activities that potentially impact business. Open-source intelligence using the internet is common, however, using the darknet is less common for the typical cybersecurity analyst. The challenges to using the darknet for open-source intelligence includes using specialized collection, processing, and analysis tools. While researchers share techniques, there are few publicly shared tools; therefore, this paper explores an open-source intelligence automation toolset that scans across the darknet connecting, collecting, processing, and analyzing. It describes and shares the tools and processes to build a secure darknet connection, and then how to collect, process, store, and analyze data. Providing tools and processes serves as an on-ramp for cybersecurity intelligence analysts to search for threats. Future studies may refine, expand, and deepen this paper's toolset framework. © 2 01 7 T he SA NS In sti tut e, Au tho r R eta ins Fu ll R igh ts © 2017 The SANS Institute Author retains full rights. Data Mining in the Dark 2 Nafziger, Brian",
"title": ""
},
{
"docid": "1da19f806430077f7ad957dbeb0cb8d1",
"text": "BACKGROUND\nTo date, periorbital melanosis is an ill-defined entity. The condition has been stated to be darkening of the skin around the eyes, dark circles, infraorbital darkening and so on.\n\n\nAIMS\nThis study was aimed at exploring the nature of pigmentation in periorbital melanosis.\n\n\nMETHODS\nOne hundred consecutive patients of periorbital melanosis were examined and investigated to define periorbital melanosis. Extent of periorbital melanosis was determined by clinical examination. Wood's lamp examination was performed in all the patients to determine the depth of pigmentation. A 2-mm punch biopsy was carried out in 17 of 100 patients.\n\n\nRESULTS\nIn 92 (92%) patients periorbital melanosis was an extension of pigmentary demarcation line over the face (PDL-F).\n\n\nCONCLUSION\nPeriorbital melanosis and pigmentary demarcation line of the face are not two different conditions; rather they are two different manifestations of the same disease.",
"title": ""
},
{
"docid": "77749f228ebcadfbff9202ee17225752",
"text": "Temporal object detection has attracted significant attention, but most popular detection methods cannot leverage rich temporal information in videos. Very recently, many algorithms have been developed for video detection task, yet very few approaches can achieve real-time online object detection in videos. In this paper, based on the attention mechanism and convolutional long short-term memory (ConvLSTM), we propose a temporal single-shot detector (TSSD) for real-world detection. Distinct from the previous methods, we take aim at temporally integrating pyramidal feature hierarchy using ConvLSTM, and design a novel structure, including a low-level temporal unit as well as a high-level one for multiscale feature maps. Moreover, we develop a creative temporal analysis unit, namely, attentional ConvLSTM, in which a temporal attention mechanism is specially tailored for background suppression and scale suppression, while a ConvLSTM integrates attention-aware features across time. An association loss and a multistep training are designed for temporal coherence. Besides, an online tubelet analysis (OTA) is exploited for identification. Our framework is evaluated on ImageNet VID dataset and 2DMOT15 dataset. Extensive comparisons on the detection and tracking capability validate the superiority of the proposed approach. Consequently, the developed TSSD-OTA achieves a fast speed and an overall competitive performance in terms of detection and tracking. Finally, a real-world maneuver is conducted for underwater object grasping.",
"title": ""
},
{
"docid": "ddf56804605fd0957316979af50f010a",
"text": "In this work, we provide an overview of our previously published works on incorporating demand uncertainty in midterm planning of multisite supply chains. A stochastic programming based approach is described to model the planning process as it reacts to demand realizations unfolding over time. In the proposed bilevel-framework, the manufacturing decisions are modeled as ‘here-and-now’ decisions, which are made before demand realization. Subsequently, the logistics decisions are postponed in a ‘waitand-see’ mode to optimize in the face of uncertainty. In addition, the trade-off between customer satisfaction level and production costs is also captured in the model. The proposed model provides an effective tool for evaluating and actively managing the exposure of an enterprises assets (such as inventory levels and profit margins) to market uncertainties. The key features of the proposed framework are highlighted through a supply chain planning case study. # 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "125655821a44bbce2646157c8465e345",
"text": "Due to its wide applicability, the problem of semi-supervised classification is attracting increasing attention in machine learning. Semi-Supervised Support Vector Machines (S3VMs) are based on applying the margin maximization principle to both labeled and unlabeled examples. Unlike SVMs, their formulation leads to a non-convex optimization problem. A suite of algorithms have recently been proposed for solving S3VMs. This paper reviews key ideas in this literature. The performance and behavior of various S3VM algorithms is studied together, under a common experimental setting.",
"title": ""
},
{
"docid": "deb1c65a6e2dfb9ab42f28c74826309c",
"text": "Large knowledge bases consisting of entities and relationships between them have become vital sources of information for many applications. Most of these knowledge bases adopt the Semantic-Web data model RDF as a representation model. Querying these knowledge bases is typically done using structured queries utilizing graph-pattern languages such as SPARQL. However, such structured queries require some expertise from users which limits the accessibility to such data sources. To overcome this, keyword search must be supported. In this paper, we propose a retrieval model for keyword queries over RDF graphs. Our model retrieves a set of subgraphs that match the query keywords, and ranks them based on statistical language models. We show that our retrieval model outperforms the-state-of-the-art IR and DB models for keyword search over structured data using experiments over two real-world datasets.",
"title": ""
},
{
"docid": "0e32cef3d4f4e6bd23a3004a44b138a6",
"text": "There have been some works that learn a lexicon together with the corpus to improve the word embeddings. However, they either model the lexicon separately but update the neural networks for both the corpus and the lexicon by the same likelihood, or minimize the distance between all of the synonym pairs in the lexicon. Such methods do not consider the relatedness and difference of the corpus and the lexicon, and may not be the best optimized. In this paper, we propose a novel method that considers the relatedness and difference of the corpus and the lexicon. It trains word embeddings by learning the corpus to predicate a word and its corresponding synonym under the context at the same time. For polysemous words, we use a word sense disambiguation filter to eliminate the synonyms that have different meanings for the context. To evaluate the proposed method, we compare the performance of the word embeddings trained by our proposed model, the control groups without the filter or the lexicon, and the prior works in the word similarity tasks and text classification task. The experimental results show that the proposed model provides better embeddings for polysemous words and improves the performance for text classification.",
"title": ""
},
{
"docid": "f1d11ef2739e02af2a95cbc93036bf43",
"text": "Extended Collaborative Less-is-More Filtering xCLiMF is a learning to rank model for collaborative filtering that is specifically designed for use with data where information on the level of relevance of the recommendations exists, e.g. through ratings. xCLiMF can be seen as a generalization of the Collaborative Less-is-More Filtering (CLiMF) method that was proposed for top-N recommendations using binary relevance (implicit feedback) data. The key contribution of the xCLiMF algorithm is that it builds a recommendation model by optimizing Expected Reciprocal Rank, an evaluation metric that generalizes reciprocal rank in order to incorporate user feedback with multiple levels of relevance. Experimental results on real-world datasets show the effectiveness of xCLiMF, and also demonstrate its advantage over CLiMF when more than two levels of relevance exist in the data.",
"title": ""
},
{
"docid": "b7387928fe8307063cafd6723c0dd103",
"text": "We introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing new radio domain appropriate transformations. This attention model allows the network to learn a localization network capable of synchronizing and normalizing a radio signal blindly with zero knowledge of the signal's structure based on optimization of the network for classification accuracy, sparse representation, and regularization. Using this architecture we are able to outperform our prior results in accuracy vs signal to noise ratio against an identical system without attention, however we believe such an attention model has implication far beyond the task of modulation recognition.",
"title": ""
}
] |
scidocsrr
|
a22e3ee7da53c0b8fe9336622d42fa38
|
A character-based convolutional neural network for language-agnostic Twitter sentiment analysis
|
[
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "fabc65effd31f3bb394406abfa215b3e",
"text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).",
"title": ""
}
] |
[
{
"docid": "ff272c41a811b6e0031d6e90a895f919",
"text": "Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications like mobile robotics or autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a prove of concept and demonstrate the usefulness of our method.",
"title": ""
},
{
"docid": "6c9acb831bc8dc82198aef10761506be",
"text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided.",
"title": ""
},
{
"docid": "14e6cf0e85c184f85ae0ae6202246d91",
"text": "The use of probiotics for human and animal health is continuously increasing. The probiotics used in humans commonly come from dairy foods, whereas the sources of probiotics used in animals are often the animals' own digestive tracts. Increasingly, probiotics from sources other than milk products are being selected for use in people who are lactose intolerant. These sources are non-dairy fermented foods and beverages, non-dairy and non-fermented foods such as fresh fruits and vegetables, feces of breast-fed infants and human breast milk. The probiotics that are used in both humans and animals are selected in stages; after the initial isolation of the appropriate culture medium, the probiotics must meet important qualifications, including being non-pathogenic acid and bile-tolerant strains that possess the ability to act against pathogens in the gastrointestinal tract and the safety-enhancing property of not being able to transfer any antibiotic resistance genes to other bacteria. The final stages of selection involve the accurate identification of the probiotic species.",
"title": ""
},
{
"docid": "b56f65fd08c8b6a9fe9ff05441ff8734",
"text": "While symbolic parsers can be viewed as deduction systems, t his view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n3) time bound for arbitrary PCFGs, while preserving as much of t he flexibility of symbolic chart parsers as allowed by the inher ent ordering of probabilistic dependencies.",
"title": ""
},
{
"docid": "cb67ffc6559d42628022994961179208",
"text": "Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Build upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices with parameters of FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. Particularly, we train 3 segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting based fusion strategy. Our method could segment brain images slice-by-slice, much faster than those based on image patches. We have evaluated our method based on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results have demonstrated that our method could build a segmentation model with Flair, T1c, and T2 scans and achieve competitive performance as those built with Flair, T1, T1c, and T2 scans.",
"title": ""
},
{
"docid": "efb81d85abcf62f4f3747a58154c5144",
"text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. Our code is available at https://github.com/sergeytulyakov/mocogan.",
"title": ""
},
{
"docid": "4424a73177671ce5f1abcd304e546434",
"text": "Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-theart results on large pose face recognition.",
"title": ""
},
{
"docid": "51fb43ac979ce0866eb541adc145ba70",
"text": "In many cooperatively breeding species, group members form a dominance hierarchy or queue to inherit the position of breeder. Models aimed at understanding individual variation in helping behavior, however, rarely take into account the effect of dominance rank on expected future reproductive success and thus the potential direct fitness costs of helping. Here we develop a kin-selection model of helping behavior in multimember groups in which only the highest ranking individual breeds. Each group member can invest in the dominant’s offspring at a cost to its own survivorship. The model predicts that lower ranked subordinates, who have a smaller probability of inheriting the group, should work harder than higher ranked subordinates. This prediction holds regardless of whether the intrinsic mortality rate of subordinates increases or decreases with rank. The prediction does not necessarily hold, however, where the costs of helping are higher for lower ranked individuals: a situation that may be common in vertebrates. The model makes two further testable predictions: that the helping effort of an individual of given rank should be lower in larger groups, and the reproductive success of dominants should be greater where group members are more closely related. Empirical evidence for these predictions is discussed. We argue that the effects of rank on stable helping effort may explain why attempts to correlate individual helping effort with relatedness in cooperatively breeding species have met with limited success.",
"title": ""
},
{
"docid": "3f4c1474f79a4d3b179d2a8391719d5f",
"text": "An unresolved challenge for all kind of temporal data is the reliable anomaly detection, especially when adaptability is required in the case of non-stationary time series or when the nature of future anomalies is unknown or only vaguely defined. Most of the current anomaly detection algorithms follow the general idea to classify an anomaly as a significant deviation from the prediction. In this paper we present a comparative study where several online anomaly detection algorithms are compared on the large Yahoo Webscope S5 anomaly benchmark. We show that a relatively Simple Online Regression Anomaly Detector (SORAD) is quite successful compared to other anomaly detectors. We discuss the importance of several adaptive and online elements of the algorithm and their influence on the overall anomaly detection accuracy.",
"title": ""
},
{
"docid": "4c2b22c651aa4cc40807cc92a044a008",
"text": "Robotic grasping is very sensitive to how accurate is the pose estimation of the object to grasp. Even a small error in the estimated pose may cause the planned grasp to fail. Several methods for robust grasp planning exploit the object geometry or tactile sensor feedback. However, object pose range estimation introduces specific uncertainties that can also be exploited to choose more robust grasps. We present a grasp planning method that explicitly considers the uncertainties on the visually-estimated object pose. We assume a known shape (e.g. primitive shape or triangle mesh), observed as a –possibly sparse– point cloud. The measured points are usually not uniformly distributed over the surface as the object is seen from a particular viewpoint; additionally this non-uniformity can be the result of heterogeneous textures over the object surface, when using stereo-vision algorithms based on robust feature-point matching. Consequently the pose estimation may be more accurate in some directions and contain unavoidable ambiguities. The proposed grasp planner is based on a particle filter to estimate the object probability distribution as a discrete set. We show that, for grasping, some ambiguities are less unfavorable so the distribution can be used to select robust grasps. Some experiments are presented with the humanoid robot iCub and its stereo cameras.",
"title": ""
},
{
"docid": "74686e9acab0a4d41c87cadd7da01889",
"text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.",
"title": ""
},
{
"docid": "7e557091d8cfe6209b1eda3b664ab551",
"text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-",
"title": ""
},
{
"docid": "9f005054e640c2db97995c7540fe2034",
"text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.",
"title": ""
},
{
"docid": "9fd3321922a73539210cb5b73d8d5d9c",
"text": "This paper presents a new model for controlling information flow in systems with mutual distrust and decentralized authority. The model allows users to share information with distrusted code (e.g., downloaded applets), yet still control how that code disseminates the shared information to others. The model improves on existing multilevel security models by allowing users to declassify information in a decentralized way, and by improving support for fine-grained data sharing. The paper also shows how static program analysis can be used to certify proper information flows in this model and to avoid most run-time information flow checks.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "4287b25e6e80d16d3d19f69bece2dfcc",
"text": "Short ranges and limited field-of-views in semi-passive radio frequency identification (RFID) tags are the most prominent obstacles that limit the number of RFID applications relying on backscatter modulation to exchange data between a reader and a tag. We propose a retrodirective array structure that, if equipped on a tag, can increase the field-of-view and the coverage area of RFID systems by making the tag insensitive to its orientation with respect to one or more RFID readers. In this article, we derive and experimentally validate the conditions under which a rat-race coupler is retrodirective. The performance of the fabricated passive retrodirective structure is evaluated through the retrodirective ideality factor (RIF) amounting to a value of 1.003, which is close to the ideal RIF of one. The article ends with a discussion on how the proposed design can improve current RFID systems from a communication perspective.",
"title": ""
},
{
"docid": "67544e71b45acb84923a3db84534a377",
"text": "The precision of point-of-gaze (POG) estimation during a fixation is an important factor in determining the usability of a noncontact eye-gaze tracking system for real-time applications. The objective of this paper is to define and measure POG fixation precision, propose methods for increasing the fixation precision, and examine the improvements when the methods are applied to two POG estimation approaches. To achieve these objectives, techniques for high-speed image processing that allow POG sampling rates of over 400 Hz are presented. With these high-speed POG sampling rates, the fixation precision can be improved by filtering while maintaining an acceptable real-time latency. The high-speed sampling and digital filtering techniques developed were applied to two POG estimation techniques, i.e., the highspeed pupil-corneal reflection (HS P-CR) vector method and a 3-D model-based method allowing free head motion. Evaluation on the subjects has shown that when operating at 407 frames per second (fps) with filtering, the fixation precision for the HS P-CR POG estimation method was improved by a factor of 5.8 to 0.035deg (1.6 screen pixels) compared to the unfiltered operation at 30 fps. For the 3-D POG estimation method, the fixation precision was improved by a factor of 11 to 0.050deg (2.3 screen pixels) compared to the unfiltered operation at 30 fps.",
"title": ""
},
{
"docid": "5d48b6fcc1d8f1050b5b5dc60354fedb",
"text": "The latency in the current neural based dialogue state tracking models prohibits them from being used efficiently for deployment in production systems, albeit their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et al. (2018) which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. By using only one recurrent networks with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces the latency in training and inference times by 35% on average, while preserving performance of belief state tracking, by 97.38% on turn request and 88.51% on joint goal and accuracy. Evaluation on Multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy.",
"title": ""
},
{
"docid": "fe38b44457f89bcb63aabe65babccd03",
"text": "Single sample face recognition have become an important problem because of the limitations on the availability of gallery images. In many real-world applications such as passport or driver license identification, there is only a single facial image per subject available. The variations between the single gallery face image and the probe face images, captured in unconstrained environments, make the single sample face recognition even more difficult. In this paper, we present a fully automatic face recognition system robust to most common face variations in unconstrained environments. Our proposed system is capable of recognizing faces from non-frontal views and under different illumination conditions using only a single gallery sample for each subject. It normalizes the face images for both in-plane and out-of-plane pose variations using an enhanced technique based on active appearance models (AAMs). We improve the performance of AAM fitting, not only by training it with in-the-wild images and using a powerful optimization technique, but also by initializing the AAM with estimates of the locations of the facial landmarks obtained by a method based on flexible mixture of parts. The proposed initialization technique results in significant improvement of AAM fitting to non-frontal poses and makes the normalization process robust, fast and reliable. Owing to the proper alignment of the face images, made possible by this approach, we can use local feature descriptors, such as Histograms of Oriented Gradients (HOG), for matching. The use of HOG features makes the system robust against illumination variations. In order to improve the discriminating information content of the feature vectors, we also extract Gabor features from the normalized face images and fuse them with HOG features using Canonical Correlation Analysis (CCA). Experimental results performed on various databases outperform the state-of-the-art methods and show the effectiveness of our proposed method in normalization and recognition of face images obtained in unconstrained environments.",
"title": ""
}
] |
scidocsrr
|
130d198e2811a56148735b62373a75a0
|
End-to-End Incremental Learning
|
[
{
"docid": "41a0681812527ef288ac4016550e53dd",
"text": "Supervised learning using deep convolutional neural network has shown its promise in large-scale image classification task. As a building block, it is now well positioned to be part of a larger system that tackles real-life multimedia tasks. An unresolved issue is that such model is trained on a static snapshot of data. Instead, this paper positions the training as a continuous learning process as new classes of data arrive. A system with such capability is useful in practical scenarios, as it gradually expands its capacity to predict increasing number of new classes. It is also our attempt to address the more fundamental issue: a good learning system must deal with new knowledge that it is exposed to, much as how human do.\n We developed a training algorithm that grows a network not only incrementally but also hierarchically. Classes are grouped according to similarities, and self-organized into levels. The newly added capacities are divided into component models that predict coarse-grained superclasses and those return final prediction within a superclass. Importantly, all models are cloned from existing ones and can be trained in parallel. These models inherit features from existing ones and thus further speed up the learning. Our experiment points out advantages of this approach, and also yields a few important open questions.",
"title": ""
}
] |
[
{
"docid": "b42f3575dad9615a40f491291661e7c5",
"text": "Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.",
"title": ""
},
{
"docid": "ea4c62866699e239a277c62501cccd11",
"text": "The oil, aqueous infusion and decoction of oregano (Origanum vulgare), of the family Limiaceae, were asessed for antibacterial activity against 11 different genera of Gram–ve bacilli viz., Aeromonas hydrophila, Citrobacter sp., Enterobacter aerogenese, Escherichia coli, Flavobacterium sp., Klebsiella ozaenae, K. pneumoniae, Proteus mirabilis, Pseudomonas aeruginosa, Salmonella typhi, S. paratyphi B, Serratia marcescens and Shigella dysenteriae, by disc diffusion method. Oregano oil exhibited the highest activity against Citrobacter species with mean zone of inhibition of 24.0 mm ± 0.5. The aqueous infusion also showed significant inhibitory activity against Klebsiella pneumoniae (20.1 mm ± 6.1 SD), Klebsiella ozaenae (19.5 mm ± 0.5 SD) and Enterobacter aerogenes (18.0 mm). Besides, all isolates were found resistant to the aqueous decoction of oregano seeds.",
"title": ""
},
{
"docid": "df0be45b6db0de70acb6bbf44e7898aa",
"text": "The paper focuses on conservation agriculture (CA), defined as minimal soil disturbance (no-till, NT) and permanent soil cover (mulch) combined with rotations, as a more sustainable cultivation system for the future. Cultivation and tillage play an important role in agriculture. The benefits of tillage in agriculture are explored before introducing conservation tillage (CT), a practice that was borne out of the American dust bowl of the 1930s. The paper then describes the benefits of CA, a suggested improvement on CT, where NT, mulch and rotations significantly improve soil properties and other biotic factors. The paper concludes that CA is a more sustainable and environmentally friendly management system for cultivating crops. Case studies from the rice-wheat areas of the Indo-Gangetic Plains of South Asia and the irrigated maize-wheat systems of Northwest Mexico are used to describe how CA practices have been used in these two environments to raise production sustainably and profitably. Benefits in terms of greenhouse gas emissions and their effect on global warming are also discussed. The paper concludes that agriculture in the next decade will have to sustainably produce more food from less land through more efficient use of natural resources and with minimal impact on the environment in order to meet growing population demands. Promoting and adopting CA management systems can help meet this goal.",
"title": ""
},
{
"docid": "65e273d046a8120532d8cd04bcadca56",
"text": "This paper explores the relationship between domain scheduling in avirtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as asecondary concern. However, this can resultin poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior.\n This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, andlatency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.",
"title": ""
},
{
"docid": "0315f0355168a78bdead8d06d5f571b4",
"text": "Machine learning techniques are increasingly being applied to clinical text that is already captured in the Electronic Health Record for the sake of delivering quality care. Applications for example include predicting patient outcomes, assessing risks, or performing diagnosis. In the past, good results have been obtained using classical techniques, such as bag-of-words features, in combination with statistical models. Recently however Deep Learning techniques, such as Word Embeddings and Recurrent Neural Networks, have shown to possibly have even greater potential. In this work, we apply several Deep Learning and classical machine learning techniques to the task of predicting violence incidents during psychiatric admission using clinical text that is already registered at the start of admission. For this purpose, we use a novel and previously unexplored dataset from the Psychiatry Department of the University Medical Center Utrecht in The Netherlands. Results show that predicting violence incidents with state-of-the-art performance is possible, and that using Deep Learning techniques provides a relatively small but consistent improvement in performance. We finally discuss the potential implication of our findings for the psychiatric practice.",
"title": ""
},
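The abstract above compares deep models against a classical bag-of-words baseline for predicting incidents from clinical text. As a rough illustration only, the sketch below shows such a baseline with scikit-learn; the notes, labels, and hyperparameters are invented placeholders (the Utrecht dataset is not public), not the paper's pipeline.

```python
# Minimal sketch of a classical bag-of-words baseline for clinical-text
# prediction. The notes and labels are hypothetical placeholders, not the
# (non-public) Utrecht dataset used in the abstract above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

notes = [
    "patient agitated on arrival, verbal aggression towards staff",
    "calm during intake interview, cooperative, no incidents reported",
    "history of impulsive behaviour, threatened a nurse last admission",
    "sleeping well, attends group therapy, friendly with co-patients",
]
labels = [1, 0, 1, 0]  # 1 = violence incident during admission, 0 = none

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, notes, labels, cv=2, scoring="roc_auc")
print("toy cross-validated AUC:", scores.mean())
```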
{
"docid": "a11a5cb713feedbe2165ea7470eddfb8",
"text": "End-to-end trained neural networks (NNs) are a compelling approach to autonomous vehicle control because of their ability to learn complex tasks without manual engineering of rule-based decisions. However, challenging road conditions, ambiguous navigation situations, and safety considerations require reliable uncertainty estimation for the eventual adoption of full-scale autonomous vehicles. Bayesian deep learning approaches provide a way to estimate uncertainty by approximating the posterior distribution of weights given a set of training data. Dropout training in deep NNs approximates Bayesian inference in a deep Gaussian process and can thus be used to estimate model uncertainty. In this paper, we propose a Bayesian NN for end-to-end control that estimates uncertainty by exploiting feature map correlation during training. This approach achieves improved model fits, as well as tighter uncertainty estimates, than traditional element-wise dropout. We evaluate our algorithms on a challenging dataset collected over many different road types, times of day, and weather conditions, and demonstrate how uncertainties can be used in conjunction with a human controller in a parallel autonomous setting.",
"title": ""
},
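The abstract above builds on dropout-as-approximate-Bayesian-inference. The sketch below illustrates plain Monte Carlo dropout (stochastic forward passes kept active at test time, mean as prediction, spread as uncertainty); it is a generic toy example, not the paper's correlated feature-map variant, and the network, input, and number of passes are arbitrary assumptions.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty estimation, the
# baseline idea the abstract builds on (not the paper's feature-map-wise
# correlated variant). Network and input are toy placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),          # e.g. a steering-angle regression head
)

x = torch.randn(1, 8)          # one toy input

# Keep dropout active at inference time and average several stochastic passes.
model.train()                  # leaves the Dropout layers in sampling mode
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

mean = samples.mean(dim=0)     # predictive mean (the control output)
std = samples.std(dim=0)       # spread across passes ~ model uncertainty
print(mean.item(), std.item())
```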
{
"docid": "d6586a261e22e9044425cb27462c3435",
"text": "In this work, we develop a planner for high-speed navigation in unknown environments, for example reaching a goal in an unknown building in minimum time, or flying as fast as possible through a forest. This planning task is challenging because the distribution over possible maps, which is needed to estimate the feasibility and cost of trajectories, is unknown and extremely hard to model for real-world environments. At the same time, the worst-case assumptions that a receding-horizon planner might make about the unknown regions of the map may be overly conservative, and may limit performance. Therefore, robots must make accurate predictions about what will happen beyond the map frontiers to navigate as fast as possible. To reason about uncertainty in the map, we model this problem as a POMDP and discuss why it is so difficult given that we have no accurate probability distribution over real-world environments. We then present a novel method of predicting collision probabilities based on training data, which compensates for the missing environment distribution and provides an approximate solution to the POMDP. Extending our previous work, the principal result of this paper is that by using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, our planner seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data. This strategy generalizes our method across all environment types, including those for which we have training data as well as those for which we do not. In familiar environment types with dense training data, we show an 80% speed improvement compared to a planner that is constrained to guarantee safety. In experiments, our planner has reached over 8 m/s in unknown cluttered indoor spaces. Video of our experimental demonstration is available at http://groups.csail.mit.edu/ rrg/bayesian_learning_high_speed_nav.",
"title": ""
},
{
"docid": "9b52a659fb6383e92c5968a082b01b71",
"text": "The internet of things (IoT) has a variety of application domains, including smart homes. This paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attacks from the smart home perspective. Further, this paper proposes an intelligent collaborative security management model to minimize security risk. The security challenges of the IoT for a smart home scenario are encountered, and a comprehensive IoT security management for smart homes has been proposed.",
"title": ""
},
{
"docid": "eff17ece2368b925f0db8e18ea0fc897",
"text": "Blockchain, as the backbone technology of the current popular Bitcoin digital currency, has become a promising decentralized data management framework. Although blockchain has been widely adopted in many applications (e.g., finance, healthcare, and logistics), its application in mobile services is still limited. This is due to the fact that blockchain users need to solve preset proof-of-work puzzles to add new data (i.e., a block) to the blockchain. Solving the proof of work, however, consumes substantial resources in terms of CPU time and energy, which is not suitable for resource-limited mobile devices. To facilitate blockchain applications in future mobile Internet of Things systems, multiple access mobile edge computing appears to be an auspicious solution to solve the proof-of-work puzzles for mobile users. We first introduce a novel concept of edge computing for mobile blockchain. Then we introduce an economic approach for edge computing resource management. Moreover, a prototype of mobile edge computing enabled blockchain systems is presented with experimental results to justify the proposed concept.",
"title": ""
},
{
"docid": "79101cf8d241e2fcb96138ad48b32406",
"text": "In this paper we discuss the benefits and the limitations, as well as different implementation options for smooth immersion into a HMD-based IVE. We evaluated our concept in a preliminary user study, in which we have tested users' awareness, reality judgment and experience in the IVE, when using different transition techniques to enter it. Our results show that a smooth transition to the IVE improves the awareness of the user and may increase the perceived interactivity of the system.",
"title": ""
},
{
"docid": "d9b19dd523fd28712df61384252d331c",
"text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.",
"title": ""
},
{
"docid": "aa3332930b0c0f94d3dde10d40b9a420",
"text": "1. Motivation and Goals. e success of data mining and search technologies is largely aributed to the ecient and eective analysis of structured data. e construction of a well-structured, machine-actionable database from raw data sources is oen the premise of consequent applications. Meanwhile, the ability of mining and reasoning over such constructed databases is at the core of powering various downstream applications on web and mobile devices. Recently, we have witnessed a signicant amount of interests in building large-scale knowledge bases (KBs) from massive, unstructured data sources (e.g., Wikipedia-based methods such as DBpedia [9], YAGO [19], Wikidata [22], automated systems like Snowball [1], KnowItAll [5], NELL [4] and DeepDive [15], and opendomain approaches like Open IE [2] and Universal Schema [14]); as well as mining and reasoning over such knowledge bases to empower a wide variety of intelligent services, including question answering [6], recommender systems [3] and semantic search [8]. Automated construction, mining and reasoning of the knowledge bases have become possible as research advances in many related areas such as information extraction, natural language processing, data mining, search, machine learning, databases and data integration. However, there are still substantial scientic and engineering challenges in advancing and integrating such relevant methodologies. e goal of this proposed workshop is to gather together leading experts from industry and academia to share their visions about the eld, discuss latest research results, and exchange exciting ideas. With a focus on invited talks and position papers, the workshop aims to provide a vivid forum of discussion about knowledge base-related research. 2. Relevance to WSDM. Knowledge base construction, mining and reasoning is closely related to a wide variety of applications in WSDM, including web search, question answering, and recommender systems. Building a high-quality knowledge base from",
"title": ""
},
{
"docid": "e4dd72a52d4961f8d4d8ee9b5b40d821",
"text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.",
"title": ""
},
{
"docid": "80ff93b5f2e0ff3cff04c314e28159fc",
"text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.",
"title": ""
},
{
"docid": "3bc34f3ef98147015e2ad94a6c615348",
"text": "Objective methods for assessing perceptual image quality traditionally attempt to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MatLab implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/. Keywords— Image quality assessment, perceptual quality, human visual system, error sensitivity, structural similarity, structural information, image coding, JPEG, JPEG2000",
"title": ""
},
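As a rough illustration of the index described above, the sketch below evaluates the SSIM formula globally on whole images; the published method applies it in local windows and averages the results, and the constants K1, K2, and L follow commonly cited defaults rather than anything stated in this abstract.

```python
# Minimal sketch of the SSIM formula applied to whole images (the published
# index uses local windows and averages); shown only to illustrate the
# luminance/contrast/structure terms. Inputs are toy 8-bit arrays.
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    )

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 20, size=ref.shape), 0, 255)
print(ssim_global(ref, ref))    # 1.0 for identical images
print(ssim_global(ref, noisy))  # < 1.0 for the distorted version
```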
{
"docid": "e737bb31bb7dbb6dbfdfe0fd01bfe33c",
"text": "Cannabidiol (CBD) is a non-psychotomimetic phytocannabinoid derived from Cannabis sativa. It has possible therapeutic effects over a broad range of neuropsychiatric disorders. CBD attenuates brain damage associated with neurodegenerative and/or ischemic conditions. It also has positive effects on attenuating psychotic-, anxiety- and depressive-like behaviors. Moreover, CBD affects synaptic plasticity and facilitates neurogenesis. The mechanisms of these effects are still not entirely clear but seem to involve multiple pharmacological targets. In the present review, we summarized the main biochemical and molecular mechanisms that have been associated with the therapeutic effects of CBD, focusing on their relevance to brain function, neuroprotection and neuropsychiatric disorders.",
"title": ""
},
{
"docid": "b3881be74f7338038b53dc6ddfa1183d",
"text": "Molecular chaperones, ubiquitin ligases and proteasome impairment have been implicated in several neurodegenerative diseases, including Alzheimer's and Parkinson's disease, which are characterized by accumulation of abnormal protein aggregates (e.g. tau and alpha-synuclein respectively). Here we report that CHIP, an ubiquitin ligase that interacts directly with Hsp70/90, induces ubiquitination of the microtubule associated protein, tau. CHIP also increases tau aggregation. Consistent with this observation, diverse of tau lesions in human postmortem tissue were found to be immunopositive for CHIP. Conversely, induction of Hsp70 through treatment with either geldanamycin or heat shock factor 1 leads to a decrease in tau steady-state levels and a selective reduction in detergent insoluble tau. Furthermore, 30-month-old mice overexpressing inducible Hsp70 show a significant reduction in tau levels. Together these data demonstrate that the Hsp70/CHIP chaperone system plays an important role in the regulation of tau turnover and the selective elimination of abnormal tau species. Hsp70/CHIP may therefore play an important role in the pathogenesis of tauopathies and also represents a potential therapeutic target.",
"title": ""
},
{
"docid": "5f36e4130459cac6cb6b2b42135ca5d5",
"text": "The term idiolect refers to the unique and distinctive use of language of an individual and it is the theoretical foundation of Authorship Attribution. In this paper we are focusing on learning distributed representations (embeddings) of social media users that reflect their writing style. These representations can be considered as stylistic fingerprints of the authors. We are exploring the performance of the two main flavours of distributed representations, namely embeddings produced by Neural Probabilistic Language models (such as word2vec) and matrix factorization (such as GloVe).",
"title": ""
},
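One very simple instantiation of the "stylistic fingerprint" idea above is to average word vectors over everything an author wrote and compare authors by cosine similarity. The sketch below does exactly that with a random stand-in embedding table; it illustrates the mechanics only and is not the representation-learning method evaluated in the paper.

```python
# Minimal sketch of a distributed "stylistic fingerprint": average word
# vectors over an author's posts and compare authors by cosine similarity.
# The embedding matrix is a random stand-in for word2vec/GloVe vectors.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "dog": 5, "ran": 6}
E = rng.normal(size=(len(vocab), 50))        # stand-in embedding matrix

def author_vector(posts):
    ids = [vocab[w] for post in posts for w in post.lower().split() if w in vocab]
    return E[ids].mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = author_vector(["the cat sat on the mat", "the cat ran"])
b = author_vector(["the dog ran", "the dog sat on the mat"])
print(cosine(a, b))
```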
{
"docid": "7882a3a5796052253db44cbb76f2e1eb",
"text": "The discovery of regulated cell death presents tantalizing possibilities for gaining control over the life–death decisions made by cells in disease. Although apoptosis has been the focus of drug discovery for many years, recent research has identified regulatory mechanisms and signalling pathways for previously unrecognized, regulated necrotic cell death routines. Distinct critical nodes have been characterized for some of these alternative cell death routines, whereas other cell death routines are just beginning to be unravelled. In this Review, we describe forms of regulated necrotic cell death, including necroptosis, the emerging cell death modality of ferroptosis (and the related oxytosis) and the less well comprehended parthanatos and cyclophilin D-mediated necrosis. We focus on small molecules, proteins and pathways that can induce and inhibit these non-apoptotic forms of cell death, and discuss strategies for translating this understanding into new therapeutics for certain disease contexts.",
"title": ""
},
{
"docid": "7509fb44b404e92abe5d4b41d108822c",
"text": "Theory predicts, and recent empirical studies have shown, that the diversity of plant species determines the diversity of associated herbivores and mediates ecosystem processes, such as aboveground net primary productivity (ANPP). However, an often-overlooked component of plant diversity, namely population genotypic diversity, may also have wide-ranging effects on community structure and ecosystem processes. We showed experimentally that increasing population genotypic diversity in a dominant old-field plant species, Solidago altissima, determined arthropod diversity and community structure and increased ANPP. The effects of genotypic diversity on arthropod diversity and ANPP were comparable to the effects of plant species diversity measured in other studies.",
"title": ""
}
] |
scidocsrr
|
d20ee5fba86a814522db8b64606f8b5e
|
Migrant mothers, left-behind fathers: the negotiation of gender subjectivities in Indonesia and the Philippines
|
[
{
"docid": "2da67ed8951caf3388ca952465d61b37",
"text": "As a significant supplier of labour migrants, Southeast Asia presents itself as an important site for the study of children in transnational families who are growing up separated from at least one migrant parent and sometimes cared for by 'other mothers'. Through the often-neglected voices of left-behind children, we investigate the impact of parental migration and the resulting reconfiguration of care arrangements on the subjective well-being of migrants' children in two Southeast Asian countries, Indonesia and the Philippines. We theorise the child's position in the transnational family nexus through the framework of the 'care triangle', representing interactions between three subject groups- 'left-behind' children, non-migrant parents/other carers; and migrant parent(s). Using both quantitative (from 1010 households) and qualitative (from 32 children) data from a study of child health and migrant parents in Southeast Asia, we examine relationships within the caring spaces both of home and of transnational spaces. The interrogation of different dimensions of care reveals the importance of contact with parents (both migrant and nonmigrant) to subjective child well-being, and the diversity of experiences and intimacies among children in the two study countries.",
"title": ""
}
] |
[
{
"docid": "513224bb1034217b058179f3805dd37f",
"text": "Existing work on subgraph isomorphism search mainly focuses on a-query-at-a-time approaches: optimizing and answering each query separately. When multiple queries arrive at the same time, sequential processing is not always the most efficient. In this paper, we study multi-query optimization for subgraph isomorphism search. We first propose a novel method for efficiently detecting useful common subgraphs and a data structure to organize them. Then we propose a heuristic algorithm based on the data structure to compute a query execution order so that cached intermediate results can be effectively utilized. To balance memory usage and the time for cached results retrieval, we present a novel structure for caching the intermediate results. We provide strategies to revise existing single-query subgraph isomorphism algorithms to seamlessly utilize the cached results, which leads to significant performance improvement. Extensive experiments verified the effectiveness of our solution.",
"title": ""
},
{
"docid": "9cf1791f7d73f7e2471b27dd7667e023",
"text": "We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.",
"title": ""
},
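For context on the computation the abstract targets, the sketch below shows singular-value soft-thresholding, the proximal step that conventional nuclear-norm solvers repeat on the full matrix and whose cost at large scale motivates factorized, active-subspace approaches. It is a generic illustration with toy data, not the paper's algorithm.

```python
# Minimal sketch of singular-value soft-thresholding, the proximal operator
# of the nuclear norm used by conventional NNROP solvers. Generic toy
# illustration only, not the active-subspace method from the abstract above.
import numpy as np

def svt(M, tau):
    """Prox of tau*||.||_*: shrink the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
noisy = low_rank + 0.1 * rng.standard_normal((50, 40))

denoised = svt(noisy, tau=2.0)
print(np.linalg.matrix_rank(denoised))        # far fewer than min(50, 40)
print(np.linalg.norm(denoised - low_rank))    # Frobenius error vs. the clean matrix
```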
{
"docid": "571c7cb6e0670539a3effbdd65858d2a",
"text": "When writing software, developers often employ abbreviations in identifier names. In fact, some abbreviations may never occur with the expanded word, or occur more often in the code. However, most existing program comprehension and search tools do little to address the problem of abbreviations, and therefore may miss meaningful pieces of code or relationships between software artifacts. In this paper, we present an automated approach to mining abbreviation expansions from source code to enhance software maintenance tools that utilize natural language information. Our scoped approach uses contextual information at the method, program, and general software level to automatically select the most appropriate expansion for a given abbreviation. We evaluated our approach on a set of 250 potential abbreviations and found that our scoped approach provides a 57% improvement in accuracy over the current state of the art.",
"title": ""
},
{
"docid": "8dc9170093a0317fff3971b18f758ff3",
"text": "In many Web applications, such as blog classification and new-sgroup classification, labeled data are in short supply. It often happens that obtaining labeled data in a new domain is expensive and time consuming, while there may be plenty of labeled data in a related but different domain. Traditional text classification ap-proaches are not able to cope well with learning across different domains. In this paper, we propose a novel cross-domain text classification algorithm which extends the traditional probabilistic latent semantic analysis (PLSA) algorithm to integrate labeled and unlabeled data, which come from different but related domains, into a unified probabilistic model. We call this new model Topic-bridged PLSA, or TPLSA. By exploiting the common topics between two domains, we transfer knowledge across different domains through a topic-bridge to help the text classification in the target domain. A unique advantage of our method is its ability to maximally mine knowledge that can be transferred between domains, resulting in superior performance when compared to other state-of-the-art text classification approaches. Experimental eval-uation on different kinds of datasets shows that our proposed algorithm can improve the performance of cross-domain text classification significantly.",
"title": ""
},
{
"docid": "dd7a87be674da00360de58df77bf980a",
"text": "This paper presents an overview of single-pass interferometric Synthetic Aperture Radar (SAR) missions employing two or more satellites flying in a close formation. The simultaneous reception of the scattered radar echoes from different viewing directions by multiple spatially distributed antennas enables the acquisition of unique Earth observation products for environmental and climate monitoring. After a short introduction to the basic principles and applications of SAR interferometry, designs for the twin satellite missions TanDEM-X and Tandem-L are presented. The primary objective of TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurement) is the generation of a global Digital Elevation Model (DEM) with unprecedented accuracy as the basis for a wide range of scientific research as well as for commercial DEM production. This goal is achieved by enhancing the TerraSAR-X mission with a second TerraSAR-X like satellite that will be launched in spring 2010. Both satellites act then as a large single-pass SAR interferometer with the opportunity for flexible baseline selection. Building upon the experience gathered with the TanDEM-X mission design, the fully polarimetric L-band twin satellite formation Tandem-L is proposed. Important objectives of this highly capable interferometric SAR mission are the global acquisition of three-dimensional forest structure and biomass inventories, large-scale measurements of millimetric displacements due to tectonic shifts, and systematic observations of glacier movements. The sophisticated mission concept and the high data-acquisition capacity of Tandem-L will moreover provide a unique data source to systematically observe, analyze, and quantify the dynamics of a wide range of additional processes in the bio-, litho-, hydro-, and cryosphere. By this, Tandem-L will be an essential step to advance our understanding of the Earth system and its intricate dynamics. Enabling technologies and techniques are described in detail. An outlook on future interferometric and tomographic concepts and developments, including multistatic SAR systems with multiple receivers, is provided.",
"title": ""
},
{
"docid": "539a25209bf65c8b26cebccf3e083cd0",
"text": "We study the problem of web search result diversification in the case where intent based relevance scores are available. A diversified search result will hopefully satisfy the information need of user-L.s who may have different intents. In this context, we first analyze the properties of an intent-based metric, ERR-IA, to measure relevance and diversity altogether. We argue that this is a better metric than some previously proposed intent aware metrics and show that it has a better correlation with abandonment rate. We then propose an algorithm to rerank web search results based on optimizing an objective function corresponding to this metric and evaluate it on shopping related queries.",
"title": ""
},
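The sketch below computes an intent-aware expected reciprocal rank in the spirit of ERR-IA: per-intent ERR values weighted by intent probabilities, following the commonly used ERR definition. The grades, intent weights, and maximum grade are toy assumptions, not values from the paper.

```python
# Minimal sketch of an intent-aware ERR computation in the spirit of ERR-IA:
# per-intent ERR weighted by intent probabilities. Toy grades and weights.
def err(grades, g_max=4):
    """Expected reciprocal rank for one intent; grades lie in [0, g_max]."""
    score, p_not_stopped = 0.0, 1.0
    for rank, g in enumerate(grades, start=1):
        r = (2 ** g - 1) / 2 ** g_max      # prob. the user is satisfied here
        score += p_not_stopped * r / rank
        p_not_stopped *= 1 - r
    return score

def err_ia(grades_per_intent, intent_probs, g_max=4):
    return sum(p * err(g, g_max) for g, p in zip(grades_per_intent, intent_probs))

# Two intents for one query; each list grades the same ranked result list.
grades_per_intent = [[4, 0, 1, 0], [0, 3, 0, 2]]
intent_probs = [0.7, 0.3]
print(err_ia(grades_per_intent, intent_probs))
```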
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "b75f793f4feac0b658437026d98a1e8b",
"text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,",
"title": ""
},
{
"docid": "9b42c1b58bb7b74bdcf09c7556800ad5",
"text": "In this paper, we propose a method to find the safest path between two locations, based on the geographical model of crime intensities. We consider the police records and news articles for finding crime density of different areas of the city. It is essential to consider news articles as there is a significant delay in updating police crime records. We address this problem by updating the crime intensities based on current news feeds. Based on the updated crime intensities, we identify the safest path. It is this real time updation of crime intensities which makes our model way better than the models that are presently in use. Our model would also inform the user of crime sprees in a particular area thereby ensuring that user avoids these crime hot spots.",
"title": ""
},
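A minimal sketch of the routing step implied above: a shortest-path search over a road graph whose edge cost blends physical length with a crime-intensity score. The toy graph, the intensity values, and the linear alpha weighting are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of crime-aware routing: Dijkstra over a road graph whose
# edge cost blends length with a crime-intensity score. Toy data throughout.
import heapq

def safest_path(edges, crime, start, goal, alpha=5.0):
    """edges: {(u, v): length}; crime: {(u, v): intensity in [0, 1]}."""
    adj = {}
    for (u, v), length in edges.items():
        cost = length * (1.0 + alpha * crime[(u, v)])
        adj.setdefault(u, []).append((v, cost))
        adj.setdefault(v, []).append((u, cost))
    frontier, best, prev = [(0.0, start)], {start: 0.0}, {}
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            break
        if d > best.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < best.get(nxt, float("inf")):
                best[nxt], prev[nxt] = nd, node
                heapq.heappush(frontier, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), best[goal]

edges = {("A", "B"): 1.0, ("B", "D"): 1.0, ("A", "C"): 1.2, ("C", "D"): 1.2}
crime = {("A", "B"): 0.9, ("B", "D"): 0.8, ("A", "C"): 0.1, ("C", "D"): 0.0}
print(safest_path(edges, crime, "A", "D"))  # prefers the longer, safer route
```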
{
"docid": "59ba83e88085445e3bcf009037af6617",
"text": "— We examine the relationship between resource abundance and several indicators of human welfare. Consistent with the existing literature on the relationship between resource abundance and economic growth we find that, given an initial income level, resource-intensive countries tend to suffer lower levels of human development. While we find only weak support for a direct link between resources and welfare, there is an indirect link that operates through institutional quality. There are also significant differences in the effects that resources have on different measures of institutional quality. These results imply that the ‘‘resource curse’’ is a more encompassing phenomenon than previously considered, and that key differences exist between the effects of different resource types on various aspects of governance and human welfare. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "68bc2abd13bcd19566eed66f0031c934",
"text": "As DRAM density keeps increasing, more rows need to be protected in a single refresh with the constant refresh number. Since no memory access is allowed during a refresh, the refresh penalty is no longer trivial and can result in significant performance degradation. To mitigate the refresh penalty, a Concurrent-REfresh-Aware Memory system (CREAM) is proposed in this work so that memory access and refresh can be served in parallel. The proposed CREAM architecture distinguishes itself with the following key contributions: (1) Under a given DRAM power budget, sub-rank-level refresh (SRLR) is developed to reduce refresh power and the saved power is used to enable concurrent memory access; (2) sub-array-level refresh (SALR) is also devised to effectively lower the probability of the conflict between memory access and refresh; (3) In addition, novel sub-array level refresh scheduling schemes, such as sub-array round-robin and dynamic scheduling, are designed to further improve the performance. A quasi-ROR interface protocol is proposed so that CREAM is fully compatible with JEDEC-DDR standard with negligible hardware overhead and no extra pin-out. The experimental results show that CREAM can improve the performance by 12.9% and 7.1% over the conventional DRAM and the Elastic-Refresh DRAM memory, respectively.",
"title": ""
},
{
"docid": "73edaa7319dcf225c081f29146bbb385",
"text": "Sign language is a specific area of human gesture communication and a full-edged complex language that is used by various deaf communities. In Bangladesh, there are many deaf and dumb people. It becomes very difficult to communicate with them for the people who are unable to understand the Sign Language. In this case, an interpreter can help a lot. So it is desirable to make computer to understand the Bangladeshi sign language that can serve as an interpreter. In this paper, a Computer Vision-based Bangladeshi Sign Language Recognition System (BdSL) has been proposed. In this system, separate PCA (Principal Component Analysis) is used for Bengali Vowels and Bengali Numbers recognition. The system is tested for 6 Bengali Vowels and 10 Bengali Numbers.",
"title": ""
},
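The sketch below illustrates a generic PCA (eigen-image) recognition pipeline of the kind the abstract describes: project images onto principal components and classify by nearest neighbour in the reduced space. The synthetic "sign images", class count, and component count are assumptions; the paper's separate vowel/number models and preprocessing are not reproduced.

```python
# Minimal sketch of a PCA (eigen-image) recognition pipeline: project
# flattened images onto principal components, then nearest-neighbour
# classification. Images are random synthetic stand-ins for sign images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_classes, per_class, h, w = 6, 20, 32, 32   # e.g. 6 vowel classes

prototypes = rng.normal(size=(n_classes, h * w))
X = np.vstack([p + 0.3 * rng.normal(size=(per_class, h * w)) for p in prototypes])
y = np.repeat(np.arange(n_classes), per_class)

model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.score(X, y))   # training accuracy on the synthetic set
```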
{
"docid": "3292af68a03deb0cffcf3b701e1c0f63",
"text": "Limitations imposed by the traditional practice in financial institutions of running risk analysis on the desktop mean many rely on models which assume a “normal” Gaussian distribution of events which can seriously underestimate the real risk. In this paper, we propose an alternative service which uses the elastic capacities of Cloud Computing to escape the limitations of the desktop and produce accurate results more rapidly. The Business Intelligence as a Service (BIaaS) in the Cloud has a dual-service approach to compute risk and pricing for financial analysis. In the first type of BIaaS service uses three APIs to simulate the Heston Model to compute the risks and asset prices, and computes the volatility (unsystematic risks) and the implied volatility (systematic risks) which can be tracked down at any time. The second type of BIaaS service uses two APIs to provide business analytics for stock market analysis, and compute results in the visualised format, so that stake holders without prior knowledge can understand. A full case study with two sets of experiments is presented to support the validity and originality of BIaaS. Additional three examples are used to support accuracy of the predicted stock index movement as a result of the use of Heston Model and its associated APIs. We describe the architecture of deployment, together with examples and results which show how our approach improves risk and investment analysis and maintaining accuracy and efficiency whilst improving performance over desktops.",
"title": ""
},
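The first BIaaS service simulates the Heston stochastic-volatility model. As a hedged illustration of that kind of computation, the sketch below runs a Euler-scheme Monte Carlo simulation of the Heston model and prices a toy call option; all parameters are arbitrary, and the paper's cloud/API decomposition is not modeled.

```python
# Minimal sketch of a Monte Carlo simulation of the Heston stochastic-
# volatility model (Euler scheme with full truncation). Parameters are
# illustrative, not taken from the paper's case study.
import numpy as np

def heston_paths(s0, v0, mu, kappa, theta, xi, rho, T, steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                  # full truncation
        s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s

final = heston_paths(s0=100, v0=0.04, mu=0.02, kappa=1.5, theta=0.04,
                     xi=0.3, rho=-0.7, T=1.0, steps=252, n_paths=20000)
strike = 100.0
call_payoff = np.maximum(final - strike, 0.0)
print(final.mean(), np.exp(-0.02) * call_payoff.mean())  # E[S_T], toy call price
```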
{
"docid": "f6264315a5bbf32b9fa21488b4c80f03",
"text": "into empirical, corpus-based learning approaches to natural language processing (NLP). Most empirical NLP work to date has focused on relatively low-level language processing such as part-ofspeech tagging, text segmentation, and syntactic parsing. The success of these approaches has stimulated research in using empirical learning techniques in other facets of NLP, including semantic analysis—uncovering the meaning of an utterance. This article is an introduction to some of the emerging research in the application of corpusbased learning techniques to problems in semantic interpretation. In particular, we focus on two important problems in semantic interpretation, namely, word-sense disambiguation and semantic parsing.",
"title": ""
},
{
"docid": "28f61d005f1b53ad532992e30b9b9b71",
"text": "We propose a method for nonlinear residual echo suppression that consists of extracting spectral features from the far-end signal, and using an artificial neural network to model the residual echo magnitude spectrum from these features. We compare the modeling accuracy achieved by realizations with different features and network topologies, evaluating the mean squared error of the estimated residual echo magnitude spectrum. We also present a low complexity real-time implementation combining an offline-trained network with online adaptation, and investigate its performance in terms of echo suppression and speech distortion for real mobile phone recordings.",
"title": ""
},
{
"docid": "45d6563b2b4c64bb11ad65c3cff0d843",
"text": "The performance of single cue object tracking algorithms may degrade due to complex nature of visual world and environment challenges. In recent past, multicue object tracking methods using single or multiple sensors such as vision, thermal, infrared, laser, radar, audio, and RFID are explored to a great extent. It was acknowledged that combining multiple orthogonal cues enhance tracking performance over single cue methods. The aim of this paper is to categorize multicue tracking methods into single-modal and multi-modal and to list out new trends in this field via investigation of representative work. The categorized works are also tabulated in order to give detailed overview of latest advancement. The person tracking datasets are analyzed and their statistical parameters are tabulated. The tracking performance measures are also categorized depending upon availability of ground truth data. Our review gauges the gap between reported work and future demands for object tracking.",
"title": ""
},
{
"docid": "40ab6e98dbf02235b882ea56a8675bba",
"text": "BACKGROUND\nThe lowering of cholesterol concentrations in individuals at high risk of cardiovascular disease improves outcome. No study, however, has assessed benefits of cholesterol lowering in the primary prevention of coronary heart disease (CHD) in hypertensive patients who are not conventionally deemed dyslipidaemic.\n\n\nMETHODS\nOf 19342 hypertensive patients (aged 40-79 years with at least three other cardiovascular risk factors) randomised to one of two antihypertensive regimens in the Anglo-Scandinavian Cardiac Outcomes Trial, 10305 with non-fasting total cholesterol concentrations 6.5 mmol/L or less were randomly assigned additional atorvastatin 10 mg or placebo. These patients formed the lipid-lowering arm of the study. We planned follow-up for an average of 5 years, the primary endpoint being non-fatal myocardial infarction and fatal CHD. Data were analysed by intention to treat.\n\n\nFINDINGS\nTreatment was stopped after a median follow-up of 3.3 years. By that time, 100 primary events had occurred in the atorvastatin group compared with 154 events in the placebo group (hazard ratio 0.64 [95% CI 0.50-0.83], p=0.0005). This benefit emerged in the first year of follow-up. There was no significant heterogeneity among prespecified subgroups. Fatal and non-fatal stroke (89 atorvastatin vs 121 placebo, 0.73 [0.56-0.96], p=0.024), total cardiovascular events (389 vs 486, 0.79 [0.69-0.90], p=0.0005), and total coronary events (178 vs 247, 0.71 [0.59-0.86], p=0.0005) were also significantly lowered. There were 185 deaths in the atorvastatin group and 212 in the placebo group (0.87 [0.71-1.06], p=0.16). Atorvastatin lowered total serum cholesterol by about 1.3 mmol/L compared with placebo at 12 months, and by 1.1 mmol/L after 3 years of follow-up.\n\n\nINTERPRETATION\nThe reductions in major cardiovascular events with atorvastatin are large, given the short follow-up time. These findings may have implications for future lipid-lowering guidelines.",
"title": ""
},
{
"docid": "ff345d732a273577ca0f965b92e1bbbd",
"text": "Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. In this paper we examine the developments in IC testing form the historic, current status and future view points. In separate sections we address the testing developments for digital, mixed signal and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.",
"title": ""
},
{
"docid": "f617b8b5c2c5fc7829cbcd0b2e64ed2d",
"text": "This paper proposes a novel lifelong learning (LL) approach to sentiment classification. LL mimics the human continuous learning process, i.e., retaining the knowledge learned from past tasks and use it to help future learning. In this paper, we first discuss LL in general and then LL for sentiment classification in particular. The proposed LL approach adopts a Bayesian optimization framework based on stochastic gradient descent. Our experimental results show that the proposed method outperforms baseline methods significantly, which demonstrates that lifelong learning is a promising research direction.",
"title": ""
},
{
"docid": "856012f3cf81a1527916da8a5136ce79",
"text": "Folk psychology postulates a spatial unity of self and body, a \"real me\" that resides in one's body and is the subject of experience. The spatial unity of self and body has been challenged by various philosophical considerations but also by several phenomena, perhaps most notoriously the \"out-of-body experience\" (OBE) during which one's visuo-spatial perspective and one's self are experienced to have departed from their habitual position within one's body. Here the authors marshal evidence from neurology, cognitive neuroscience, and neuroimaging that suggests that OBEs are related to a failure to integrate multisensory information from one's own body at the temporo-parietal junction (TPJ). It is argued that this multisensory disintegration at the TPJ leads to the disruption of several phenomenological and cognitive aspects of self-processing, causing illusory reduplication, illusory self-location, illusory perspective, and illusory agency that are experienced as an OBE.",
"title": ""
}
] |
scidocsrr
|
58b5ade47ae4cc04654b2487e690c059
|
Global Laparoscopy Positioning System with a Smart Trocar
|
[
{
"docid": "68f74c4fc9d1afb00ac2ec0221654410",
"text": "Most algorithms in 3-D Computer Vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lens, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.",
"title": ""
}
] |
[
{
"docid": "97da7e7b07775f58c86d26a2b714ba9f",
"text": "Nowadays, visual object recognition is one of the key applications for computer vision and deep learning techniques. With the recent development in mobile computing technology, many deep learning framework software support Personal Digital Assistant systems, i.e., smart phones or tablets, allowing developers to conceive innovative applications. In this work, we intend to employ such ICT strategies with the aim of supporting the tourism in an art city: for these reasons, we propose to provide tourists with a mobile application in order to better explore artistic heritage within an urban environment by using just their smartphone's camera. The software solution is based on Google TensorFlow, an innovative deep learning framework mainly designed for pattern recognition tasks. The paper presents our design choices and an early performance evaluation.",
"title": ""
},
{
"docid": "0a1db5c3eb76d4f0f9338bae157226f0",
"text": "Reasoning is the fundamental capability which requires knowledge. Various graph models have proven to be very valuable in knowledge representation and reasoning. Recently, explosive data generation and accumulation capabilities have paved way for Big Data and Data Intensive Systems. Knowledge Representation and Reasoning with large and growing data is extremely challenging but crucial for businesses to predict trends and support decision making. Any contemporary, reasonably complex knowledge based system will have to consider this onslaught of data, to use appropriate and sufficient reasoning for semantic processing of information by machines. This paper surveys graph based knowledge representation and reasoning, various graph models such as Conceptual Graphs, Concept Graphs, Semantic Networks, Inference Graphs and Causal Bayesian Networks used for representation and reasoning, common and recent research uses of these graph models, typically in Big Data environment, and the near future needs and challenges for graph based KRR in computing systems. Observations are presented in a table, highlighting suitability of the surveyed graph models for contemporary scenarios.",
"title": ""
},
{
"docid": "73ac084079951348463de3aa619eee58",
"text": "Multi Density DBSCAN (Density Based Spatial Clustering of Application with Noise) is an excellent density-based clustering algorithm, which extends DBSCAN algorithm so as to be able to discover the different densities clusters, and retains the advantage of separating noise and finding arbitrary shape clusters. But, because of great memory demand and low calculation efficiency, Multi Density DBSCAN can't deal with large database. Therefore, GCMDDBSCAN is proposed in this paper, and within it 'migration-coefficient' conception is introduced firstly. In GCMDDBSCAN, with the grid technique, the optimization effect of contribution and migration-coefficient, and the efficient SP-tree query index, the runtime is reduced a lot, and the capability of clustering large database is obviously enhanced, at the same time, the accuracy of clustering result is not degraded.",
"title": ""
},
{
"docid": "a32ea25ea3adc455dd3dfd1515c97ae3",
"text": "Item-to-item collaborative filtering (aka.item-based CF) has been long used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile as her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is in the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively less work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM) [1] , our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.",
"title": ""
},
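The sketch below illustrates the attention-weighted item-based scoring idea described above: the score for a (user, target item) pair aggregates the target item's interactions with the user's historical items, weighted by a softmax attention whose denominator is smoothed by an exponent beta. The embeddings, attention weights, and layer sizes are random stand-ins and assumptions, not trained NAIS parameters.

```python
# Minimal sketch of attention-weighted item-based scoring in the spirit of
# NAIS: aggregate target/history item interactions with weights from a small
# attention MLP and a beta-smoothed softmax. All parameters are random
# stand-ins, not trained model weights.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 100, 16
P = rng.normal(scale=0.1, size=(n_items, dim))   # "history" item embeddings
Q = rng.normal(scale=0.1, size=(n_items, dim))   # "target" item embeddings
W = rng.normal(scale=0.1, size=(2 * dim, dim))   # attention MLP weights (assumed shape)
h = rng.normal(scale=0.1, size=dim)              # attention projection vector

def score(user_history, target, beta=0.5):
    q_t = Q[target]
    feats = np.concatenate([np.tile(q_t, (len(user_history), 1)),
                            P[user_history]], axis=1)
    logits = np.maximum(feats @ W, 0.0) @ h      # ReLU MLP -> attention logits
    a = np.exp(logits)
    a /= a.sum() ** beta                         # beta-smoothed softmax
    sims = P[user_history] @ q_t                 # item-item interaction terms
    return float(a @ sims)

print(score(user_history=[3, 17, 42, 58], target=7))
```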
{
"docid": "010cfba81f65e6e135efcb0aa3bf5d6c",
"text": "Recent studies on cloud-radio access networks (CRANs) assume the availability of a single processor (cloud) capable of managing the entire network performance; inter-cloud interference is treated as background noise. This paper considers the more practical scenario of the downlink of a CRAN formed by multiple clouds, where each cloud is connected to a cluster of multiple-antenna base stations (BSs) via high-capacity wireline backhaul links. The network is composed of several disjoint BSs' clusters, each serving a pre-known set of single-antenna users. To account for both inter- cloud and intra-cloud interference, the paper considers the problem of minimizing the total network power consumption subject to quality of service constraints, by jointly determining the set of active BSs connected to each cloud and the beamforming vectors of every user across the network. The paper solves the problem using Lagrangian duality theory through a dual decomposition approach, which decouples the problem into multiple and independent subproblems, the solution of which depends on the dual optimization problem. The solution then proceeds in updating the dual variables and the active set of BSs at each cloud iteratively. The proposed approach leads to a distributed implementation across the multiple clouds through a reasonable exchange of information between adjacent clouds. The paper further proposes a centralized solution to the problem. Simulation results suggest that the proposed algorithms significantly outperform the conventional per-cloud update solution, especially at high signal-to-interference-plus- noise ratio (SINR) target.",
"title": ""
},
{
"docid": "da129ff6527c7b8af0f34a910051e5ef",
"text": "A compact ultra-wideband (UWB) bandpass filter is proposed based on the coplanar-waveguide (CPW) split-mode resonator. By suitably introducing a short-circuited stub to implement the shunt inductance between two quarter wavelength CPW stepped-impedance resonators, a strong magnetic coupling may be realized so that a CPW split-mode resonator may be constructed. Moreover, by properly designing the dual-metal-plane structure, one may accomplish a microstrip-to-CPW feeding mechanism to provide strong enough capacitive coupling for bandwidth enhancement and also introduce an extra electric coupling between input and output ports so that two transmission zeros may be created for selectivity improvement. The implemented UWB filter shows a fractional bandwidth of 116% and two transmission zeros at 1.705 and 11.39 GHz. Good agreement between simulated and measured responses is observed.",
"title": ""
},
{
"docid": "a46721e527f1fefd0380b7c8c40729ca",
"text": "The use of game-based learning in the classroom has become more common in recent years. Many game-based learning tools and platforms are based on a quiz concept where the students can score points if they can choose the correct answer among multiple answers. The article describes an experiment where the game-based student response system Kahoot! was compared to a traditional non-gamified student response system, as well as the usage of paper forms for formative assessment. The goal of the experiment was to investigate whether gamified formative assessments improve the students’ engagement, motivation, enjoyment, concentration, and learning. In the experiment, the three different formative assessment tools/methods were used to review and summarize the same topic in three parallel lectures in an IT introductory course. The first method was to have the students complete a paper quiz, and then review the results afterwards using hand raising. The second method was to use the non-gamified student response system Clicker where the students gave their response to a quiz through polling. The third method was to use the game-based student response system Kahoot!. All three lectures were taught in the exact same way, teaching the same syllabus and using the same teacher. The only difference was the method use to summarize the lecture. A total of 384 students participated in the experiment, where 127 subjects did the paper quiz, 175 used the non-gamified student response system, and 82 students using the gamified approach. The gender distribution was 48% female students and 52% male students. Preand a post-test were used to assess the learning outcome of the lectures, and a questionnaire was used to get data on the students’ engagement and motivation. The results show significant improvement in motivation, engagement, enjoyment, and concentration for the gamified approach, but we did not find significant learning improvement.",
"title": ""
},
{
"docid": "0b4c076b80d91eb20ef71e63f17e9654",
"text": "Current sports injury reporting systems lack a common conceptual basis. We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.",
"title": ""
},
{
"docid": "d505a0fe73296fe19f0f683773c9520d",
"text": "Abstractive text summarization is a complex task whose goal is to generate a concise version of a text without necessarily reusing the sentences from the original source, but still preserving the meaning and the key contents. In this position paper we address this issue by modeling the problem as a sequence to sequence learning and exploiting Recurrent Neural Networks (RNN). Moreover, we discuss the idea of combining RNNs and probabilistic models in a unified way in order to incorporate prior knowledge, such as linguistic features. We believe that this approach can obtain better performance than the state-of-the-art models for generating well-formed summaries.",
"title": ""
},
{
"docid": "734fc66c7c745498ca6b2b7fc6780919",
"text": "In this paper, we investigate the use of an unsupervised label clustering technique and demonstrate that it enables substantial improvements in visual relationship prediction accuracy on the Person in Context (PIC) dataset. We propose to group object labels with similar patterns of relationship distribution in the dataset into fewer categories. Label clustering not only mitigates both the large classification space and class imbalance issues, but also potentially increases data samples for each clustered category. We further propose to incorporate depth information as an additional feature into the instance segmentation model. The additional depth prediction path supplements the relationship prediction model in a way that bounding boxes or segmentation masks are unable to deliver. We have rigorously evaluated the proposed techniques and performed various ablation analysis to validate the benefits of them.",
"title": ""
},
{
"docid": "74e4d1886594ecce6d60861bec6ac3d8",
"text": "From small voltage regulators to large motor drives, power electronics play a very important role in present day technology. The power electronics market is currently dominated by silicon based devices. However due to inherent limitations of silicon material they are approaching thermal limit in terms of high power and high temperature operation. Performance can only be improved with the development of new power devices with better material properties. Silicon Carbide devices are now gaining popularity as next generation semiconductor devices. Due to its inherent material properties such as high breakdown field, wide band gap, high electron saturation velocity, and high thermal conductivity, they serve as a better alternative to the silicon counterparts. Here an attempt is made to study the unique properties of SiC MOSFET and requirements for designing a gate drive circuit for the same. The switching characteristics of SCH2080KE are analyzed using LTspice by performing double pulse test. Also driver circuit is designed for SiC MOSFET SCH2080KE and its performance is tested by implementing a buck converter.",
"title": ""
},
{
"docid": "8c58b608430e922284d8b4b8cd5cc51d",
"text": "At the end of the 19th century, researchers observed that biological substances have frequency- dependent electrical properties and that tissue behaves \"like a capacitor\" [1]. Consequently, in the first half of the 20th century, the permittivity of many types of cell suspensions and tissues was characterized up to frequencies of approximately 100 MHz. From the measurements, conclusions were drawn, in particular, about the electrical properties of the cell membranes, which are the main contributors to the tissue impedance at frequencies below 10 MHz [2]. In 1926, a study found a significant different permittivity for breast cancer tissue compared with healthy tissue at 20 kHz [3]. After World War II, new instrumentation enabled measurements up to 10 GHz, and a vast amount of data on the dielectric properties of different tissue types in the microwave range was published [4]-[6].",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "2f7a63571f8d695d402a546a457470c4",
"text": "Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle, by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of shadow groups whose elements serve as close approximations. Over the shadow groups, the pretraining step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the simplest. Which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher order representations, and why representation complexity increases as the layers get deeper.",
"title": ""
},
{
"docid": "450fdd88aa45a405eace9a5a1e0113f7",
"text": "DNN-based cross-modal retrieval has become a research hotspot, by which users can search results across various modalities like image and text. However, existing methods mainly focus on the pairwise correlation and reconstruction error of labeled data. They ignore the semantically similar and dissimilar constraints between different modalities, and cannot take advantage of unlabeled data. This paper proposes Cross-modal Deep Metric Learning with Multi-task Regularization (CDMLMR), which integrates quadruplet ranking loss and semi-supervised contrastive loss for modeling cross-modal semantic similarity in a unified multi-task learning architecture. The quadruplet ranking loss can model the semantically similar and dissimilar constraints to preserve cross-modal relative similarity ranking information. The semi-supervised contrastive loss is able to maximize the semantic similarity on both labeled and unlabeled data. Compared to the existing methods, CDMLMR exploits not only the similarity ranking information but also unlabeled cross-modal data, and thus boosts cross-modal retrieval accuracy.",
"title": ""
},
{
"docid": "54eb416e4bc32654c6e55a58baacd853",
"text": "Condom catheters are often used in the management of male urinary incontinence, and are considered to be safe. As condom catheters are placed on the male genitalia, sometimes adequate care is not taken after placement owing to poor medical care of debilitated patients and feelings of embarrassment and shame. Similarly, sometimes the correct size of penile sheath is not used. Strangulation of penis due to condom catheter is a rare condition; only few such cases have been reported in the literature. Proper application and routine care of condom catheters are important in preventing this devastating complication especially in a neurologically debilitated population. We present a case of penile necrosis due to condom catheter. We will also discuss proper catheter care and treatment of possible complications.",
"title": ""
},
{
"docid": "6206b0e393c54cfe2921604c2405bfeb",
"text": "and fibrils along a greater length of tendon would follow, converting the new fibrils bridging the repair site into fibers and bundles. The discrete mass of collagen needed to completely heal a transected tendon would be quite small relative to the total collagen mass of the tendon; this is consistent with previous findings (Goldfarb et al., 2001). The tendon could be the sole source of this existing collagen with little effect on its overall strength once the remodelling process was complete, decreasing the need for new collagen synthesis. Postoperative ruptures at or adjacent to the healing site, the hand surgeon’s feared yet poorly understood complication of flexor tendon repair, may be explained by the inherent weakness caused by the recycling process itself. Biochemical modifications at the time of repair may evolve through an improved understanding of the seemingly paradoxical role of collagen fibril segment recycling in temporarily weakening healing tendon so that it may be strengthened. The principal author wishes to thank H.P. Ehrlich, PhD, Hershey, P.A. and M.R. Forough, PhD, Seattle, WA, for their comments and insight, and the late H. Sittertz-Bhatkar, PhD, for the images herein.",
"title": ""
},
{
"docid": "736a7f4cad46138f350fda904d5de624",
"text": "In the last decades, the development of new technologies applied to lipidomics has revitalized the analysis of lipid profile alterations and the understanding of the underlying molecular mechanisms of lipid metabolism, together with their involvement in the occurrence of human disease. Of particular interest is the study of omega-3 and omega-6 long chain polyunsaturated fatty acids (LC-PUFAs), notably EPA (eicosapentaenoic acid, 20:5n-3), DHA (docosahexaenoic acid, 22:6n-3), and ARA (arachidonic acid, 20:4n-6), and their transformation into bioactive lipid mediators. In this sense, new families of PUFA-derived lipid mediators, including resolvins derived from EPA and DHA, and protectins and maresins derived from DHA, are being increasingly investigated because of their active role in the \"return to homeostasis\" process and resolution of inflammation. Recent findings reviewed in the present study highlight that the omega-6 fatty acid ARA appears increased, and omega-3 EPA and DHA decreased in most cancer tissues compared to normal ones, and that increments in omega-3 LC-PUFAs consumption and an omega-6/omega-3 ratio of 2-4:1, are associated with a reduced risk of breast, prostate, colon and renal cancers. Along with their lipid-lowering properties, omega-3 LC-PUFAs also exert cardioprotective functions, such as reducing platelet aggregation and inflammation, and controlling the presence of DHA in our body, especially in our liver and brain, which is crucial for optimal brain functionality. Considering that DHA is the principal omega-3 FA in cortical gray matter, the importance of DHA intake and its derived lipid mediators have been recently reported in patients with major depressive and bipolar disorders, Alzheimer disease, Parkinson's disease, and amyotrophic lateral sclerosis. The present study reviews the relationships between major diseases occurring today in the Western world and LC-PUFAs. More specifically this review focuses on the dietary omega-3 LC-PUFAs and the omega-6/omega-3 balance, in a wide range of inflammation disorders, including autoimmune diseases. This review suggests that the current recommendations of consumption and/or supplementation of omega-3 FAs are specific to particular groups of age and physiological status, and still need more fine tuning for overall human health and well being.",
"title": ""
},
{
"docid": "1d9b50bf7fa39c11cca4e864bbec5cf3",
"text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.",
"title": ""
},
{
"docid": "c221568e2ed4d6192ab04119046c4884",
"text": "An efficient Ultra-Wideband (UWB) Frequency Selective Surface (FSS) is presented to mitigate the potential harmful effects of Electromagnetic Interference (EMI) caused by the radiations emitted by radio devices. The proposed design consists of circular and square elements printed on the opposite surfaces of FR4 substrate of 3.2 mm thickness. It ensures better angular stability by up to 600, bandwidth has been significantly enhanced by up to 16. 21 GHz to provide effective shielding against X-, Ka- and K-bands. While signal attenuation has also been improved remarkably in the desired band compared to the results presented in the latest research. Theoretical results are presented for TE and TM polarization for normal and oblique angles of incidence.",
"title": ""
}
] |
scidocsrr
|
eacb65f10b0211b0129209075e070a3f
|
A serious game model for cultural heritage
|
[
{
"docid": "49e3c33aa788d3d075c7569c6843065a",
"text": "Cultural heritage around the globe suffers from wars, natural disasters and human negligence. The importance of cultural heritage documentation is well recognized and there is an increasing pressure to document our heritage both nationally and internationally. This has alerted international organizations to the need for issuing guidelines describing the standards for documentation. Charters, resolutions and declarations by international organisations underline the importance of documentation of cultural heritage for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research. Important ones include the International Council on Monuments and Sites, ICOMOS (ICOMOS, 2005) and UNESCO, including the famous Venice Charter, The International Charter for the Conservation and Restoration of Monuments and Sites, 1964, (UNESCO, 2005).",
"title": ""
},
{
"docid": "c1e12a4feec78d480c8f0c02cdb9cb7d",
"text": "Although the Parthenon has stood on the Athenian Acropolis for nearly 2,500 years, its sculptural decorations have been scattered to museums around the world. Many of its sculptures have been damaged or lost. Fortunately, most of the decoration survives through drawings, descriptions, and casts. A component of our Parthenon Project has been to assemble digital models of the sculptures and virtually reunite them with the Parthenon. This sketch details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of almost all of the existing pediments, metopes, and frieze. Our techniques have been designed to work as quickly as possible and at low cost.",
"title": ""
}
] |
[
{
"docid": "5d8bc135f10c1a9b741cc60ad7aae04f",
"text": "In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation (Bahdanau et al. (2014)). Our experiments show that the proposed architecture significantly outperforms the state-of-the art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance.",
"title": ""
},
{
"docid": "75e1e8e65bd5dcf426bf9f3ee7c666a5",
"text": "This paper offers a new, nonlinear model of informationseeking behavior, which contrasts with earlier stage models of information behavior and represents a potential cornerstone for a shift toward a new perspective for understanding user information behavior. The model is based on the findings of a study on interdisciplinary information-seeking behavior. The study followed a naturalistic inquiry approach using interviews of 45 academics. The interview results were inductively analyzed and an alternative framework for understanding information-seeking behavior was developed. This model illustrates three core processes and three levels of contextual interaction, each composed of several individual activities and attributes. These interact dynamically through time in a nonlinear manner. The behavioral patterns are analogous to an artist’s palette, in which activities remain available throughout the course of information-seeking. In viewing the processes in this way, neither start nor finish points are fixed, and each process may be repeated or lead to any other until either the query or context determine that information-seeking can end. The interactivity and shifts described by the model show information-seeking to be nonlinear, dynamic, holistic, and flowing. The paper offers four main implications of the model as it applies to existing theory and models, requirements for future research, and the development of information literacy curricula. Central to these implications is the creation of a new nonlinear perspective from which user information-seeking can be interpreted.",
"title": ""
},
{
"docid": "d3b24655e01cbb4f5d64006222825361",
"text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "808115043786372af3e3fb726cc3e191",
"text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.",
"title": ""
},
{
"docid": "a3345ad4a18be52b478d3e75cf05a371",
"text": "In the course of the routine use of NMR as an aid for organic chemistry, a day-to-day problem is the identification of signals deriving from common contaminants (water, solvents, stabilizers, oils) in less-than-analytically-pure samples. This data may be available in the literature, but the time involved in searching for it may be considerable. Another issue is the concentration dependence of chemical shifts (especially 1H); results obtained two or three decades ago usually refer to much more concentrated samples, and run at lower magnetic fields, than today’s practice. We therefore decided to collect 1H and 13C chemical shifts of what are, in our experience, the most popular “extra peaks” in a variety of commonly used NMR solvents, in the hope that this will be of assistance to the practicing chemist.",
"title": ""
},
{
"docid": "12f6f7e9350d436cc167e00d72b6e1b1",
"text": "This paper reviews the state of the art of a polyphase complex filter for RF front-end low-IF transceivers applications. We then propose a multi-stage polyphase filter design to generate a quadrature I/Q signal to achieve a wideband precision quadrature phase shift with a constant 90 ° phase difference for self-interference cancellation circuit for full duplex radio. The number of the stages determines the bandwidth requirement of the channel. An increase of 87% in bandwidth is attained when our design is implemented in multi-stage from 2 to an extended 6 stages. A 4-stage polyphase filter achieves 2.3 GHz bandwidth.",
"title": ""
},
{
"docid": "671eb73ad86525cb183e2b8dbfe09947",
"text": "We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent’s experience. Because this loss is highly flexible in its ability to take into account the agent’s history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG’s learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.",
"title": ""
},
{
"docid": "6021968dc39e13620e90c30d9c008d19",
"text": "In recent years, Deep Reinforcement Learning has made impressive advances in solving several important benchmark problems for sequential decision making. Many control applications use a generic multilayer perceptron (MLP) for non-vision parts of the policy network. In this work, we propose a new neural network architecture for the policy network representation that is simple yet effective. The proposed Structured Control Net (SCN) splits the generic MLP into two separate sub-modules: a nonlinear control module and a linear control module. Intuitively, the nonlinear control is for forward-looking and global control, while the linear control stabilizes the local dynamics around the residual of global control. We hypothesize that this will bring together the benefits of both linear and nonlinear policies: improve training sample efficiency, final episodic reward, and generalization of learned policy, while requiring a smaller network and being generally applicable to different training methods. We validated our hypothesis with competitive results on simulations from OpenAI MuJoCo, Roboschool, Atari, and a custom 2D urban driving environment, with various ablation and generalization tests, trained with multiple black-box and policy gradient training methods. The proposed architecture has the potential to improve upon broader control tasks by incorporating problem specific priors into the architecture. As a case study, we demonstrate much improved performance for locomotion tasks by emulating the biological central pattern generators (CPGs) as the nonlinear part of the architecture.",
"title": ""
},
{
"docid": "cd3bbec4c7f83c9fb553056b1b593bec",
"text": "We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. Recurrent Neural Networks and Music Many researchers are familiar with feedforward neural networks consisting of 2 or more layers of processing units, each with weighted connections to the next layer. Each unit passes the sum of its weighted inputs through a nonlinear sigmoid function. Each layer’s outputs are fed forward through the network to the next layer, until the output layer is reached. Weights are initialized to small initial random values. Via the back-propagation algorithm (Rumelhart et al. 1986), outputs are compared to targets, and the errors are propagated back through the connection weights. Weights are updated by gradient descent. Through an iterative training procedure, examples (inputs) and targets are presented repeatedly; the network learns a nonlinear function of the inputs. It can then generalize and produce outputs for new examples. These networks have been explored by the computer music community for classifying chords (Laden and Keefe 1991) and other musical tasks (Todd and Loy 1991, Griffith and Todd 1999). A recurrent network uses feedback from one or more of its units as input in choosing the next output. This means that values generated by units at time step t-1, say y(t-1), are part of the inputs x(t) used in selecting the next set of outputs y(t). A network may be fully recurrent; that is all units are connected back to each other and to themselves. Or part of the network may be fed back in recurrent links. Todd (Todd 1991) uses a Jordan recurrent network (Jordan 1986) to reproduce classical songs and then to produce new songs. The outputs are recurrently fed back as inputs as shown in Figure 1. In addition, self-recurrence on the inputs provides a decaying history of these inputs. The weight update algorithm is back-propagation, using teacher forcing (Williams and Zipser 1988). With teacher forcing, the target outputs are presented to the recurrent inputs from the output units (instead of the actual outputs, which are not correct yet during training). Pitches (on output or input) are represented in a localized binary representation, with one bit for each of the 12 chromatic notes. More bits can be added for more octaves. C is represented as 100000000000. C# is 010000000000, D is 001000000000. Time is divided into 16th note increments. Note durations are determined by how many increments a pitch’s output unit is on (one). E.g. an eighth note lasts for two time increments. Rests occur when all outputs are off (zero). Figure 1. Jordan network, with outputs fed back to inputs. (Mozer 1994)’s CONCERT uses a backpropagationthrough-time (BPTT) recurrent network to learn various musical tasks and to learn melodies with harmonic accompaniment. Then, CONCERT can run in generation mode to compose new music. The BPTT algorithm (Williams and Zipser 1992, Werbos 1988, Campolucci 1998) can be used with a fully recurrent network where the outputs of all units are connected to the inputs of all units, including themselves. 
The network can include external inputs and optionally, may include a regular feedforward output network (see Figure 2). The BPTT weight updates are proportional to the gradient of the sum of errors over every time step in the interval between start time t0 and end time t1, assuming the error at time step t is affected by the outputs at all previous time steps, starting with t0. BPTT requires saving all inputs, states, and errors for all time steps, and updating the weights in a batch operation at the end, time t1. One sequence (each example) requires one batch weight update. Figure 2. A fully self-recurrent network with external inputs, and optional feedforward output attachment. If there is no output attachment, one or more recurrent units are designated as output units. CONCERT is a combination of BPTT with a layer of output units that are probabilistically interpreted, and a maximum likelihood training criterion (rather than a squared error criterion). There are two sets of outputs (and two sets of inputs), one set for pitch and the other for duration. One pass through the network corresponds to a note, rather than a slice of time. We present only the pitch representation here since that is our focus. Mozer uses a psychologically based representation of musical notes. Figure 3 shows the chromatic circle (CC) and the circle of fifths (CF), used with a linear octave value for CONCERT’s pitch representation. Ignoring octaves, we refer to the rest of the representation as CCCF. Six digits represent the position of a pitch on CC and six more its position on CF. C is represented as 000000 000000, C# as 000001 111110, D as 000011 111111, and so on. Mozer uses -1,1 rather than 0,1 because of implementation details. Figure 3. Chromatic Circle on Left, Circle of Fifths on Right. Pitch position on each circle determines its representation. For chords, CONCERT uses the overlapping subharmonics representation of (Laden and Keefe, 1991). Each chord tone starts in Todd’s binary representation, but 5 harmonics (integer multiples of its frequency) are added. C3 is now C3, C4, G4, C5, E5 requiring a 3 octave representation. Because the 7th of the chord does not overlap with the triad harmonics, Laden and Keefe use triads only. C major triad C3, E3, G3, with harmonics, is C3, C4, G4, C5, E5, E3, E4, B4, E5, G#5, G3, G4, D4, G5, B5. The triad pitches and harmonics give an overlapping representation. Each overlapping pitch adds 1 to its corresponding input. CONCERT excludes octaves, leaving 12 highly overlapping chord inputs, plus an input that is positive when certain key-dependent chords appear, and learns waltzes over a harmonic chord structure. Eck and Schmidhuber (2002) use Long Short-term Memory (LSTM) recurrent networks to learn and compose blues music (Hochreiter and Schmidhuber 1997, and see Gers et al., 2000 for succinct pseudo-code for the algorithm). An LSTM network consists of input units, output units, and a set of memory blocks, each of which includes one or more memory cells. Blocks are connected to each other recurrently. Figure 4 shows an LSTM network on the left, and the contents of one memory block (this one with one cell) on the right. There may also be a direct connection from external inputs to the output units. This is the configuration found in Gers et al., and the one we use in our experiments. Eck and Schmidhuber also add recurrent connections from output units to memory blocks. Each block contains one or more memory cells that are self-recurrent. 
All other units in the block gate the inputs, outputs, and the memory cell itself. A memory cell can “cache” errors and release them for weight updates much later in time. The gates can learn to delay a block’s outputs, to reset the memory cells, and to inhibit inputs from reaching the cell or to allow inputs in. Figure 4. An LSTM network on the left and a one-cell memory block on the right, with input, forget, and output gates. Black squares on gate connections show that the gates can control whether information is passed to the cell, from the cell, or even within the cell. Weight updates are based on gradient descent, with multiplicative gradient calculations for gates, and approximations from the truncated BPTT (Williams and Peng 1990) and Real-Time Recurrent Learning (RTRL) (Robinson and Fallside 1987) algorithms. LSTM networks are able to perform counting tasks in time-series. Eck and Schmidhuber’s model of blues music is a 12-bar chord sequence over which music is composed/improvised. They successfully trained an LSTM network to learn a sequence of blues chords, with varying durations. Splitting time into 8th note increments, each chord’s duration is either 8 or 4 time steps (whole or half durations). Chords are sets of 3 or 4 tones (triads or triads plus sevenths), represented in a 12-bit localized binary representation with values of 1 for a chord pitch, and 0 for a non-chord pitch. Chords are inverted to fit in 1 octave. For example, C7 is represented as 100010010010 (C,E,G,B-flat), and F7 is 100101000100 (F,A,C,E-flat inverted to C,E-flat,F,A). The network has 4 memory blocks, each containing 2 cells. The outputs are considered probabilities of whether the corresponding note is on or off. The goal is to obtain an output of more that .5 for each note that should be on in a particular chord, with all other outputs below .5. Eck and Schmidhuber’s work includes learning melody and chords with two LSTM networks containing 4 blocks each. Connections are made from the chord network to the melody network, but not vice versa. The authors composed short 1-bar melodies over each of the 12 possible bars. The network is trained on concatenations of the short melodies over the 12-bar blues chord sequence. The melody network is trained until the chords network has learned according to the criterion. In music generation mode, the network can generate new melodies using this training. In a system called CHIME (Franklin 2000, 2001), we first train a Jordan recurrent network (Figure 1) to produce 3 Sonny Rollins jazz/blues melodies. The current chord and index number of the song are non-recurrent inputs to the network. Chords are represented as sets of 4 note values of 1 in a 12-note input layer, with non-chord note inputs set to 0 just as in Eck and Schmidhuber’s chord representation. Chords are also inverted to fit within one octave. 24 (2 octaves) of the outputs are notes, and the 25th is a rest. Of these 25, the unit with the largest value ",
"title": ""
},
{
"docid": "092bf4ee1626553206ee9b434cda957b",
"text": ".......................................................................................................... 3 Introduction ...................................................................................................... 4 Methods........................................................................................................... 7 Procedure ..................................................................................................... 7 Inclusion and exclusion criteria ..................................................................... 8 Data extraction and quality assessment ....................................................... 8 Results ............................................................................................................ 9 Included studies ........................................................................................... 9 Quality of included articles .......................................................................... 13 Excluded studies ........................................................................................ 15 Fig. 1 CONSORT 2010 Flow Diagram ....................................................... 16 Table 1: Primary studies ............................................................................. 17 Table2: Secondary studies ......................................................................... 18 Discussion ..................................................................................................... 19 Conclusion ..................................................................................................... 22 Acknowledgements ....................................................................................... 22 References .................................................................................................... 23 Appendix ....................................................................................................... 32",
"title": ""
},
{
"docid": "5c7678fae587ef784b4327d545a73a3e",
"text": "The vision of Future Internet based on standard communication protocols considers the merging of computer networks, Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Internet of Media (IoM), and Internet of Services (IoS), into a common global IT platform of seamless networks and networked “smart things/objects”. However, with the widespread deployment of networked, intelligent sensor technologies, an Internet of Things (IoT) is steadily evolving, much like the Internet decades ago. In the future, hundreds of billions of smart sensors and devices will interact with one another without human intervention, on a Machine-to-Machine (M2M) basis. They will generate an enormous amount of data at an unprecedented scale and resolution, providing humans with information and control of events and objects even in remote physical environments. This paper will provide an overview of performance evaluation, challenges and opportunities of IOT results for machine learning presented by this new paradigm.",
"title": ""
},
{
"docid": "c26919afa32708786ae7f96b88883ed9",
"text": "A Privacy Enhancement Technology (PET) is an application or a mechanism which allows users to protect the privacy of their personally identifiable information. Early PETs were about enabling anonymous mailing and anonymous browsing, but lately there have been active research and development efforts in many other problem domains. This paper describes the first pattern language for developing privacy enhancement technologies. Currently, it contains 12 patterns. These privacy patterns are not limited to a specific problem domain; they can be applied to design anonymity systems for various types of online communication, online data sharing, location monitoring, voting and electronic cash management. The pattern language guides a developer when he or she is designing a PET for an existing problem, or innovating a solution for a new problem.",
"title": ""
},
{
"docid": "c6058966ef994d7b447f47d41d7fff33",
"text": "The advancement in computer technology has encouraged the researchers to develop software for assisting doctors in making decision without consulting the specialists directly. The software development exploits the potential of human intelligence such as reasoning, making decision, learning (by experiencing) and many others. Artificial intelligence is not a new concept, yet it has been accepted as a new technology in computer science. It has been applied in many areas such as education, business, medical and manufacturing. This paper explores the potential of artificial intelligence techniques particularly for web-based medical applications. In addition, a model for web-based medical diagnosis and prediction is",
"title": ""
},
{
"docid": "753b167933f5dd92c4b8021f6b448350",
"text": "The advent of social media and microblogging platforms has radically changed the way we consume information and form opinions. In this paper, we explore the anatomy of the information space on Facebook by characterizing on a global scale the news consumption patterns of 376 million users over a time span of 6 y (January 2010 to December 2015). We find that users tend to focus on a limited set of pages, producing a sharp community structure among news outlets. We also find that the preferences of users and news providers differ. By tracking how Facebook pages \"like\" each other and examining their geolocation, we find that news providers are more geographically confined than users. We devise a simple model of selective exposure that reproduces the observed connectivity patterns.",
"title": ""
},
{
"docid": "b0a0ad5f90d849696e3431373db6b4a5",
"text": "A comparative study of the structure of the flower in three species of Robinia L., R. pseudoacacia, R. × ambigua, and R. neomexicana, was carried out. The widely naturalized R. pseudoacacia, as compared to the two other species, has the smallest sizes of flower organs at all stages of development. Qualitative traits that describe each phase of the flower development were identified. A set of microscopic morphological traits of the flower (both quantitative and qualitative) was analyzed. Additional taxonomic traits were identified: shape of anthers, size and shape of pollen grains, and the extent of pollen fertility.",
"title": ""
},
{
"docid": "da72f2990b3e21c45a92f7b54be1d202",
"text": "A low-profile, high-gain, and wideband metasurface (MS)-based filtering antenna with high selectivity is investigated in this communication. The planar MS consists of nonuniform metallic patch cells, and it is fed by two separated microstrip-coupled slots from the bottom. The separation between the two slots together with a shorting via is used to provide good filtering performance in the lower stopband, whereas the MS is elaborately designed to provide a sharp roll-off rate at upper band edge for the filtering function. The MS also simultaneously works as a high-efficient radiator, enhancing the impedance bandwidth and antenna gain of the feeding slots. To verify the design, a prototype operating at 5 GHz has been fabricated and measured. The reflection coefficient, radiation pattern, antenna gain, and efficiency are studied, and reasonable agreement between the measured and simulated results is observed. The prototype with dimensions of 1.3 λ0 × 1.3 λ0 × 0.06 λ0 has a 10-dB impedance bandwidth of 28.4%, an average gain of 8.2 dBi within passband, and an out-of-band suppression level of more than 20 dB within a very wide stop-band.",
"title": ""
},
{
"docid": "157c084aa6622c74449f248f98314051",
"text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.",
"title": ""
},
{
"docid": "5912dda99171351acc25971d3c901624",
"text": "New cultivars with very erect leaves, which increase light capture for photosynthesis and nitrogen storage for grain filling, may have increased grain yields. Here we show that the erect leaf phenotype of a rice brassinosteroid–deficient mutant, osdwarf4-1, is associated with enhanced grain yields under conditions of dense planting, even without extra fertilizer. Molecular and biochemical studies reveal that two different cytochrome P450s, CYP90B2/OsDWARF4 and CYP724B1/D11, function redundantly in C-22 hydroxylation, the rate-limiting step of brassinosteroid biosynthesis. Therefore, despite the central role of brassinosteroids in plant growth and development, mutation of OsDWARF4 alone causes only limited defects in brassinosteroid biosynthesis and plant morphology. These results suggest that regulated genetic modulation of brassinosteroid biosynthesis can improve crops without the negative environmental effects of fertilizers.",
"title": ""
},
{
"docid": "7f3bccab6d6043d3dedc464b195df084",
"text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.",
"title": ""
}
] |
scidocsrr
|
a4b6ff84625bc57c265b825f712ba42b
|
Real-Time Face Detection and Motion Analysis With Application in “Liveness” Assessment
|
[
{
"docid": "6f56d10f90b1b3ba0c1700fa06c9199e",
"text": "Finding human faces automatically in an image is a dif cult yet important rst step to a fully automatic face recognition system This paper presents an example based learning approach for locating unoccluded frontal views of human faces in complex scenes The technique represents the space of human faces by means of a few view based face and non face pattern prototypes At each image location a value distance measure is com puted between the local image pattern and each prototype A trained classi er determines based on the set of dis tance measurements whether a human face exists at the current image location We show empirically that our distance metric is critical for the success of our system",
"title": ""
}
] |
[
{
"docid": "82c8a692e3b39e58bd73997b2e922c2c",
"text": "The traditional approaches to building survivable systems assume a framework of absolute trust requiring a provably impenetrable and incorruptible Trusted Computing Base (TCB). Unfortunately, we don’t have TCB’s, and experience suggests that we never will. We must instead concentrate on software systems that can provide useful services even when computational resource are compromised. Such a system will 1) Estimate the degree to which a computational resources may be trusted using models of possible compromises. 2) Recognize that a resource is compromised by relying on a system for long term monitoring and analysis of the computational infrastructure. 3) Engage in self-monitoring, diagnosis and adaptation to best achieve its purposes within the available infrastructure. All this, in turn, depends on the ability of the application, monitoring, and control systems to engage in rational decision making about what resources they should use in order to achieve the best ratio of expected benefit to risk.",
"title": ""
},
{
"docid": "9e1c3d4a8bbe211b85b19b38e39db28e",
"text": "This paper presents a novel context-based scene recognition method that enables mobile robots to recognize previously observed topological places in known environments or categorize previously unseen places in new environments. We achieve this by introducing the Histogram of Oriented Uniform Patterns (HOUP), which provides strong discriminative power for place recognition, while offering a significant level of generalization for place categorization. HOUP descriptors are used for image representation within a subdivision framework, where the size and location of sub-regions are determined using an informative feature selection method based on kernel alignment. Further improvement is achieved by developing a similarity measure that accounts for perceptual aliasing to eliminate the effect of indistinctive but visually similar regions that are frequently present in outdoor and indoor scenes. An extensive set of experiments reveals the excellent performance of our method on challenging categorization and recognition tasks. Specifically, our proposed method outperforms the current state of the art on two place categorization datasets with 15 and 5 place categories, and two topological place recognition datasets, with 5 and 27 places.",
"title": ""
},
{
"docid": "58d629b3ac6bd731cd45126ce3ed8494",
"text": "The Support Vector Machine (SVM) is a common machine learning tool that is widely used because of its high classification accuracy. Implementing SVM for embedded real-time applications is very challenging because of the intensive computations required. This increases the attractiveness of implementing SVM on hardware platforms for reaching high performance computing with low cost and power consumption. This paper provides the first comprehensive survey of current literature (2010-2015) of different hardware implementations of SVM classifier on Field-Programmable Gate Array (FPGA). A classification of existing techniques is presented, along with a critical analysis and discussion. A challenging trade-off between meeting embedded real-time systems constraints and high classification accuracy has been observed. Finally, some key future research directions are suggested.",
"title": ""
},
{
"docid": "c8c57c89f5bd92c726373f9cf77726e0",
"text": "Research of named entity recognition (NER) on electrical medical records (EMRs) focuses on verifying whether methods to NER in traditional texts are effective for that in EMRs, and there is no model proposed for enhancing performance of NER via deep learning from the perspective of multiclass classification. In this paper, we annotate a real EMR corpus to accomplish the model training and evaluation. And, then, we present a Convolutional Neural Network (CNN) based multiclass classification method for mining named entities from EMRs. The method consists of two phases. In the phase 1, EMRs are pre-processed for representing samples with word embedding. In the phase 2, the method is built by segmenting training data into many subsets and training a CNN binary classification model on each of subset. Experimental results showed the effectiveness of our method.",
"title": ""
},
{
"docid": "2d6225b20cf13d2974ce78877642a2f7",
"text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.",
"title": ""
},
{
"docid": "40e38080e12b2d73836fcb1cf79db033",
"text": "The research in statistical parametric speech synthesis is towards improving naturalness and intelligibility. In this work, the deviation in spectral tilt of the natural and synthesized speech is analyzed and observed a large gap between the two. Furthermore, the same is analyzed for different classes of sounds, namely low-vowels, mid-vowels, high-vowels, semi-vowels, nasals, and found to be varying with category of sound units. Based on variation, a novel method for spectral tilt enhancement is proposed, where the amount of enhancement introduced is different for different classes of sound units. The proposed method yields improvement in terms of intelligibility, naturalness, and speaker similarity of the synthesized speech.",
"title": ""
},
{
"docid": "0ea239ac71e65397d0713fe8c340f67c",
"text": "Mutations in leucine-rich repeat kinase 2 (LRRK2) are a common cause of familial and sporadic Parkinson's disease (PD). Elevated LRRK2 kinase activity and neurodegeneration are linked, but the phosphosubstrate that connects LRRK2 kinase activity to neurodegeneration is not known. Here, we show that ribosomal protein s15 is a key pathogenic LRRK2 substrate in Drosophila and human neuron PD models. Phosphodeficient s15 carrying a threonine 136 to alanine substitution rescues dopamine neuron degeneration and age-related locomotor deficits in G2019S LRRK2 transgenic Drosophila and substantially reduces G2019S LRRK2-mediated neurite loss and cell death in human dopamine and cortical neurons. Remarkably, pathogenic LRRK2 stimulates both cap-dependent and cap-independent mRNA translation and induces a bulk increase in protein synthesis in Drosophila, which can be prevented by phosphodeficient T136A s15. These results reveal a novel mechanism of PD pathogenesis linked to elevated LRRK2 kinase activity and aberrant protein synthesis in vivo.",
"title": ""
},
{
"docid": "5caa0646c0d5b1a2a0c799e048b5557a",
"text": "The goal of this research is to find the efficient and most widely used cryptographic algorithms form the history, investigating one of its merits and demerits which have not been modified so far. Perception of cryptography, its techniques such as transposition & substitution and Steganography were discussed. Our main focus is on the Playfair Cipher, its advantages and disadvantages. Finally, we have proposed a few methods to enhance the playfair cipher for more secure and efficient cryptography.",
"title": ""
},
{
"docid": "cf374e1d1fa165edaf0b29749f32789c",
"text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.",
"title": ""
},
{
"docid": "2eba092d19cc8fb35994e045f826e950",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "b42f4d645e2a7e24df676a933f414a6c",
"text": "Epilepsy is a common neurological condition which affects the central nervous system that causes people to have a seizure and can be assessed by electroencephalogram (EEG). Electroencephalography (EEG) signals reflect two types of paroxysmal activity: ictal activity and interictal paroxystic events (IPE). The relationship between IPE and ictal activity is an essential and recurrent question in epileptology. The spike detection in EEG is a difficult problem. Many methods have been developed to detect the IPE in the literature. In this paper we propose three methods to detect the spike in real EEG signal: Page Hinkley test, smoothed nonlinear energy operator (SNEO) and fractal dimension. Before using these methods, we filter the signal. The Singular Spectrum Analysis (SSA) filter is used to remove the noise in an EEG signal.",
"title": ""
},
{
"docid": "fa7cbe54e7fdc2ef373cf4b966181eba",
"text": "Fingerprint enhancement is a critical step in fingerprint recognition systems. There are many existing contact-based fingerprint image enhancement methods and they have their own strengths and weaknesses. However, image enhancement approaches that can be used for contactless fingerprints are rarely considered and the number of such approaches is limited. Furthermore, the performance of existing contact-based fingerprint enhancement methods on the contactless fingerprint samples are unsatisfactory. Therefore, in this paper we propose an improved 3-step fingerprint image quality enhancement approach, which can be used for enhancing contactless fingerprint samples. The evaluation results show that, the proposed enhancement method significantly increases the number of detected minutiae, and improves the performance of fingerprint recognition system by reducing 7% and 15% EER compared to existing methods, respectively.",
"title": ""
},
{
"docid": "310036a45a95679a612cc9a60e44e2e0",
"text": "A broadband single layer, dual circularly polarized (CP) reflectarrays with linearly polarized feed is introduced in this paper. To reduce the electrical interference between the two orthogonal polarizations of the CP element, a novel subwavelength multiresonance element with a Jerusalem cross and an open loop is proposed, which presents a broader bandwidth and phase range excessing 360° simultaneously. By tuning the x- and y-axis dimensions of the proposed element, an optimization technique is used to minimize the phase errors on both orthogonal components. Then, a single-layer offset-fed 20 × 20-element dual-CP reflectarray has been designed and fabricated. The measured results show that the 1-dB gain and 3-dB axial ratio (AR) bandwidths of the dual-CP reflectarray can reach 12.5% and 50%, respectively, which shows a significant improvement in gain and AR bandwidths as compared to reflectarrays with conventional λ/2 cross-dipole elements.",
"title": ""
},
{
"docid": "7b7f1f029e13008b1578c87c7319b645",
"text": "This paper presents the design and manufacturing processes of a new piezoactuated XY stage with integrated parallel, decoupled, and stacked kinematics structure for micro-/nanopositioning application. The flexure-based XY stage is composed of two decoupled prismatic-prismatic limbs which are constructed by compound parallelogram flexures and compound bridge-type displacement amplifiers. The two limbs are assembled in a parallel and stacked manner to achieve a compact stage with the merits of parallel kinematics. Analytical models for the mechanical performance assessment of the stage in terms of kinematics, statics, stiffness, load capacity, and dynamics are derived and verified with finite element analysis. A prototype of the XY stage is then fabricated, and its decoupling property is tested. Moreover, the Bouc-Wen hysteresis model of the system is identified by resorting to particle swarm optimization, and a control scheme combining the inverse hysteresis model-based feedforward with feedback control is employed to compensate for the plant nonlinearity and uncertainty. Experimental results reveal that a submicrometer accuracy single-axis motion tracking and biaxial contouring can be achieved by the micropositioning system, which validate the effectiveness of the proposed mechanism and controller designs as well.",
"title": ""
},
{
"docid": "56bd18820903da1917ca5d194b520413",
"text": "The problem of identifying subtle time-space clustering of dis ease, as may be occurring in leukemia, is described and reviewed. Published approaches, generally associated with studies of leuke mia, not dependent on knowledge of the underlying population for their validity, are directed towards identifying clustering by establishing a relationship between the temporal and the spatial separations for the n(n —l)/2 possible pairs which can be formed from the n observed cases of disease. Here it is proposed that statistical power can be improved by applying a reciprocal trans form to these separations. While a permutational approach can give valid probability levels for any observed association, for reasons of practicability, it is suggested that the observed associa tion be tested relative to its permutational variance. Formulas and computational procedures for doing so are given. While the distance measures between points represent sym metric relationships subject to mathematical and geometric regu larities, the variance formula developed is appropriate for ar bitrary relationships. Simplified procedures are given for the ease of symmetric and skew-symmetric relationships. The general pro cedure is indicated as being potentially useful in other situations as, for example, the study of interpersonal relationships. Viewing the procedure as a regression approach, the possibility for extend ing it to nonlinear and mult ¡variatesituations is suggested. Other aspects of the problem and of the procedure developed are discussed.",
"title": ""
},
{
"docid": "86d705256c19f63dac90162b33818a9b",
"text": "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its timemachine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data.",
"title": ""
},
{
"docid": "ce429bbed5895731c9a3a9b77e3f488b",
"text": "[Purpose] This study assessed the relationships between the ankle dorsiflexion range of motion and foot and ankle strength. [Subjects and Methods] Twenty-nine healthy (young adults) volunteers participated in this study. Each participant completed tests for ankle dorsiflexion range of motion, hallux flexor strength, and ankle plantar and dorsiflexor strength. [Results] The results showed (1) a moderate correlation between ankle dorsiflexor strength and dorsiflexion range of motion and (2) a moderate correlation between ankle dorsiflexor strength and first toe flexor muscle strength. Ankle dorsiflexor strength is the main contributor ankle dorsiflexion range of motion to and first toe flexor muscle strength. [Conclusion] Ankle dorsiflexion range of motion can play an important role in determining ankle dorsiflexor strength in young adults.",
"title": ""
},
{
"docid": "01a95065526771523795494c9968efb9",
"text": "Depression is one of the most common and debilitating psychiatric disorders and is a leading cause of suicide. Most people who become depressed will have multiple episodes, and some depressions are chronic. Persons with bipolar disorder will also have manic or hypomanic episodes. Given the recurrent nature of the disorder, it is important not just to treat the acute episode, but also to protect against its return and the onset of subsequent episodes. Several types of interventions have been shown to be efficacious in treating depression. The antidepressant medications are relatively safe and work for many patients, but there is no evidence that they reduce risk of recurrence once their use is terminated. The different medication classes are roughly comparable in efficacy, although some are easier to tolerate than are others. About half of all patients will respond to a given medication, and many of those who do not will respond to some other agent or to a combination of medications. Electro-convulsive therapy is particularly effective for the most severe and resistant depressions, but raises concerns about possible deleterious effects on memory and cognition. It is rarely used until a number of different medications have been tried. Although it is still unclear whether traditional psychodynamic approaches are effective in treating depression, interpersonal psychotherapy (IPT) has fared well in controlled comparisons with medications and other types of psychotherapies. It also appears to have a delayed effect that improves the quality of social relationships and interpersonal skills. It has been shown to reduce acute distress and to prevent relapse and recurrence so long as it is continued or maintained. Treatment combining IPT with medication retains the quick results of pharmacotherapy and the greater interpersonal breadth of IPT, as well as boosting response in patients who are otherwise more difficult to treat. The main problem is that IPT has only recently entered clinical practice and is not widely available to those in need. Cognitive behavior therapy (CBT) also appears to be efficacious in treating depression, and recent studies suggest that it can work for even severe depressions in the hands of experienced therapists. Not only can CBT relieve acute distress, but it also appears to reduce risk for the return of symptoms as long as it is continued or maintained. Moreover, it appears to have an enduring effect that reduces risk for relapse or recurrence long after treatment is over. Combined treatment with medication and CBT appears to be as efficacious as treatment with medication alone and to retain the enduring effects of CBT. There also are indications that the same strategies used to reduce risk in psychiatric patients following successful treatment can be used to prevent the initial onset of depression in persons at risk. More purely behavioral interventions have been studied less than the cognitive therapies, but have performed well in recent trials and exhibit many of the benefits of cognitive therapy. Mood stabilizers like lithium or the anticonvulsants form the core treatment for bipolar disorder, but there is a growing recognition that the outcomes produced by modern pharmacology are not sufficient. Both IPT and CBT show promise as adjuncts to medication with such patients. The same is true for family-focused therapy, which is designed to reduce interpersonal conflict in the family. Clearly, more needs to be done with respect to treatment of the bipolar disorders. 
Good medical management of depression can be hard to find, and the empirically supported psychotherapies are still not widely practiced. As a consequence, many patients do not have access to adequate treatment. Moreover, not everyone responds to the existing interventions, and not enough is known about what to do for people who are not helped by treatment. Although great strides have been made over the past few decades, much remains to be done with respect to the treatment of depression and the bipolar disorders.",
"title": ""
},
{
"docid": "e9ea3dd59bb3ab6bd698b44c993a8b0e",
"text": "We present an optical flow algorithm for large displacement motions. Most existing optical flow methods use the standard coarse-to-fine framework to deal with large displacement motions which has intrinsic limitations. Instead, we formulate the motion estimation problem as a motion segmentation problem. We use approximate nearest neighbor fields to compute an initial motion field and use a robust algorithm to compute a set of similarity transformations as the motion candidates for segmentation. To account for deviations from similarity transformations, we add local deformations in the segmentation process. We also observe that small objects can be better recovered using translations as the motion candidates. We fuse the motion results obtained under similarity transformations and under translations together before a final refinement. Experimental validation shows that our method can successfully handle large displacement motions. Although we particularly focus on large displacement motions in this work, we make no sacrifice in terms of overall performance. In particular, our method ranks at the top of the Middlebury benchmark.",
"title": ""
}
] |
scidocsrr
|
0677c5968c3e97d00c7b64b5465f9a0a
|
SDN and OpenFlow Evolution: A Standards Perspective
|
[
{
"docid": "e93c5395f350d44b59f549a29e65d75c",
"text": "Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.",
"title": ""
}
] |
[
{
"docid": "056d7d639d91636b382860a3df08d0dd",
"text": "This paper describes a novel monolithic low voltage (1-V) CMOS RF front-end architecture with an integrated quadrature coupler (QC) and two subharmonic mixers for direct-down conversion. The LC-folded-cascode technique is adopted to achieve low-voltage operation while the subharmonic mixers in conjunction with the QC are used to eliminate LO self-mixing. In addition, the inherent bandpass characteristic of the LC tanks helps suppression of LO leakage at RF port. The circuit was fabricated in a standard 0.18-mum CMOS process for 5-6 GHz applications. At 5.4 GHz, the RF front-end exhibits a voltage gain of 26.2 dB and a noise figure of 5.2 dB while dissipating 45.5 mW from a 1.0-V supply. The achieved input-referred DC-offset due to LO self-mixing is below -110.7 dBm.",
"title": ""
},
{
"docid": "57fa4164381d9d9691b9ba5c506addbd",
"text": "The aim of this study was to evaluate the acute effects of unilateral ankle plantar flexors static-stretching (SS) on the passive range of movement (ROM) of the stretched limb, surface electromyography (sEMG) and single-leg bounce drop jump (SBDJ) performance measures of the ipsilateral stretched and contralateral non-stretched lower limbs. Seventeen young men (24 ± 5 years) performed SBDJ before and after (stretched limb: immediately post-stretch, 10 and 20 minutes and non-stretched limb: immediately post-stretch) unilateral ankle plantar flexor SS (6 sets of 45s/15s, 70-90% point of discomfort). SBDJ performance measures included jump height, impulse, time to reach peak force, contact time as well as the sEMG integral (IEMG) and pre-activation (IEMGpre-activation) of the gastrocnemius lateralis. Ankle dorsiflexion passive ROM increased in the stretched limb after the SS (pre-test: 21 ± 4° and post-test: 26.5 ± 5°, p < 0.001). Post-stretching decreases were observed with peak force (p = 0.029), IEMG (P<0.001), and IEMGpre-activation (p = 0.015) in the stretched limb; as well as impulse (p = 0.03), and jump height (p = 0.032) in the non-stretched limb. In conclusion, SS effectively increased passive ankle ROM of the stretched limb, and transiently (less than 10 minutes) decreased muscle peak force and pre-activation. The decrease of jump height and impulse for the non-stretched limb suggests a SS-induced central nervous system inhibitory effect. Key pointsWhen considering whether or not to SS prior to athletic activities, one must consider the potential positive effects of increased ankle dorsiflexion motion with the potential deleterious effects of power and muscle activity during a simple jumping task or as part of the rehabilitation process.Since decreased jump performance measures can persist for 10 minutes in the stretched leg, the timing of SS prior to performance must be taken into consideration.Athletes, fitness enthusiasts and therapists should also keep in mind that SS one limb has generalized effects upon contralateral limbs as well.",
"title": ""
},
{
"docid": "7ca863355d1fb9e4954c360c810ece53",
"text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.",
"title": ""
},
{
"docid": "1594afac3fe296478bd2a0c5a6ca0bb4",
"text": "Executive Summary The market turmoil of 2008 highlighted the importance of risk management to investors in the UK and worldwide. Realized risk levels and risk forecasts from the Barra Europe Equity Model (EUE2L) are both currently at the highest level for the last two decades. According to portfolio theory, institutional investors can gain significant risk-reduction and return-enhancement benefits from venturing out of their domestic markets. These effects from international diversification are due to imperfect correlations among markets. In this paper, we explore the historical diversification effects of an international allocation for UK investors. We illustrate that investing only in the UK market can be considered an active deviation from a global benchmark. Although a domestic allocation to UK large-cap stocks has significant international exposure when revenue sources are taken into account, as an active deviation from a global benchmark a UK domestic strategy has high concentration, leading to high asset-specific risk, and significant style and industry tilts. We show that an international allocation resulted in higher returns and lower risk for a UK investor in the last one, three, five, and ten years. In GBP terms, the MSCI All Country World Investable Market Index (ACWI IMI) — a global index that could be viewed as a proxy for a global portfolio — achieved higher return and lower risk compared to the MSCI UK Index during these periods. A developed market minimum-variance portfolio, represented by the MSCI World Minimum Volatility Index, 1 The market turmoil of 2008 highlighted the importance of risk management to investors in the UK and worldwide. Figure 1 illustrates that the historical standard deviation of the MSCI UK Index is now near the highest level in recent history. The risk forecast for the index, obtained using the Barra Europe Equity Model, typically showed still better risk and return performance during these periods. The decreases in risk represented by allocations to MSCI ACWI IMI and the MSCI World Minimum Volatility Index were robust based on four different measures of portfolio risk. We also consider a stepwise approach to international diversification, sequentially adding small cap and international assets to a large cap UK portfolio. We show that this approach also reduced risk during the observed period, but we did not find evidence that it was more efficient for risk reduction than a passive allocation to MSCI ACWI IMI.",
"title": ""
},
{
"docid": "8147143579de86a5eeb668037c2b8c5d",
"text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.",
"title": ""
},
{
"docid": "e4d10b57a9ddb304263abd869c1a79d9",
"text": "rrrmrrxvivt This research explores the effectiveness of interactive advertising on a new medium platform. Like the presence in industry and the media themselves, the academic research stream is fairly new. Our research seeks to isolate the key feature of interactivity from confounding factors and to begin to tease apart those situations for which interactivity might be highly desirable from those situations for which traditional advertising vehicles may be sufficient or superior. We find that the traditional linear advertising format of conventional ads is actually better than interactive advertising for certain kinds of consumers and for certain kinds of ads. In particular, we find that a cognitive “matching” of the system properties (being predominately visual or verbal) and the consumer segment needs (preferring their information to be presented in a visual or verbal manner) appears to be critical. More research should be conducted before substantial expenditures are devoted to advertising on these interactive media. These new means of communicating with customers are indeed exciting, but they must be demonstrated to be effective on consumer engagement and persuasion. INTERACTIVE MARKETING SYSTEMS are enjoying explosive growth, giving firms a plethora of ways of contacting consumers (e.g., kiosks, Web pages, home computers). In these interactive systems, a customer controls the content of the interaction, requesting or giving information, at the attributelevel (e.g., a PC’s RAM and MHz) or in terms of benefits (e.g., a PC’s capability and speed). A customer can control the presentation order of the information, and unwanted options may be deleted. The consumer may request that the information sought be presented in comparative table format, in video, audio, pictorial format, or in standard text. Increasingly, customers can also order products using the interactive system. These new media are no fad, and while they are only in the infancy of their development, they are already changing the marketplace (cf. Hoffman and Novak, 1996). The hallmark of all of these new media is their irlteuactivity-the consumer and the manufacturer enter into dialogue in a way not previously possible. Interactive marketing, as defined in this paper, is: “the immediately iterative process by which customer needs and desires are uncovered, met, modified, and satisfied by the providing firm.” Interactivity iterates between the firm and the customer, eliciting information from both parties, and attempting to align interests and possibilities. The iterations occur over some duration, allowing the firm to build databases that provide subsequent purchase opportunities tailored to the consumer (Blattberg and Deighton, 1991). The consumer’s input allows subsequent information to be customized to pertinent interests and bars irrelevant communications, thereby enhancing both the consumer experience and the efficiency of the firm’s advertising and marketing dollar. As exciting as these new interactive media appear to be, little is actually known about their effect on consumers’ consideration of the advertised products. As Berthon, Pitt, and Watson (1996) state, “advertising and marketing practitioners, and academics are by now aware that more systematic research is required to reveal the true nature of commerce on the Web” or for interactive systems more generally. Our research is intended to address this need, and more specifically to focus on the effects of interactivity. 
We investigate interactive marketing in terms of its performance in persuading consumers to buy the advertised products. We wish to begin to understand whether interactive methods are truly superior to standard advertising formats as the excitement about the new media would suggest. Alternatively, perhaps there are some circumstances for which traditional advertising is more effective. Certainly it would not be desirable to channel the majority of one’s advertising resources toward interactive media until they are demonstrated to be superior persuasion vehicles. To this end we present an experimental study comparing consumer reactions to products advertised through an interactive medium with re-",
"title": ""
},
{
"docid": "5a44a37f5ae6e485a4096861f53f6245",
"text": "The goal of the paper is to show that some types of L evy processes such as the hyperbolic motion and the CGMY are particularly suitable for asset price modelling and option pricing. We wish to review some fundamental mathematic properties of L evy distributions, such as the one of infinite divisibility, and how they translate observed features of asset price returns. We explain how these processes are related to Brownian motion, the central process in finance, through stochastic time changes which can in turn be interpreted as a measure of the economic activity. Lastly, we focus on two particular classes of pure jump L evy processes, the generalized hyperbolic model and the CGMY models, and report on the goodness of fit obtained both on stock prices and option prices. 2002 Published by Elsevier Science B.V. JEL classification: G12; G13",
"title": ""
},
{
"docid": "6933e944e88307c85f0b398b5abbb48f",
"text": "The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the \"best\" model. A decision to keep a variable in the model might be based on the clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. We show that the advantage of this approach is when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worchester Heart Attack Study (WHAS) data. If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones they should consider this macro as an alternative tool.",
"title": ""
},
{
"docid": "dfcc6b34f008e4ea9d560b5da4826f4d",
"text": "The paper describes a Chinese shadow play animation system based on Kinect. Users, without any professional training, can personally manipulate the shadow characters to finish a shadow play performance by their body actions and get a shadow play video through giving the record command to our system if they want. In our system, Kinect is responsible for capturing human movement and voice commands data. Gesture recognition module is used to control the change of the shadow play scenes. After packaging the data from Kinect and the recognition result from gesture recognition module, VRPN transmits them to the server-side. At last, the server-side uses the information to control the motion of shadow characters and video recording. This system not only achieves human-computer interaction, but also realizes the interaction between people. It brings an entertaining experience to users and easy to operate for all ages. Even more important is that the application background of Chinese shadow play embodies the protection of the art of shadow play animation. Keywords—Gesture recognition, Kinect, shadow play animation, VRPN.",
"title": ""
},
{
"docid": "8582c4a040e4dec8fd141b00eaa45898",
"text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.",
"title": ""
},
{
"docid": "cde6d84d22ca9d8cd851f3067bc9b41e",
"text": "The purpose of the present study was to examine the reciprocal relationships between authenticity and measures of life satisfaction and distress using a 2-wave panel study design. Data were collected from 232 college students attending 2 public universities. Structural equation modeling was used to analyze the data. The results of the cross-lagged panel analysis indicated that after controlling for temporal stability, initial authenticity (Time 1) predicted later distress and life satisfaction (Time 2). Specifically, higher levels of authenticity at Time 1 were associated with increased life satisfaction and decreased distress at Time 2. Neither distress nor life satisfaction at Time 1 significantly predicted authenticity at Time 2. However, the relationship between Time 1 distress and Time 2 authenticity was not significantly different from the relationship between Time 1 authenticity and Time 2 distress. Results are discussed in light of humanistic-existential theories and the empirical research on well-being.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
},
{
"docid": "6c1d7a70d0fa21222a0d1046eee128c7",
"text": "A BSTRACT Background Goal-directed therapy has been used for severe sepsis and septic shock in the intensive care unit. This approach involves adjustments of cardiac preload, afterload, and contractility to balance oxygen delivery with oxygen demand. The purpose of this study was to evaluate the efficacy of early goal-directed therapy before admission to the intensive care unit. Methods We randomly assigned patients who arrived at an urban emergency department with severe sepsis or septic shock to receive either six hours of early goal-directed therapy or standard therapy (as a control) before admission to the intensive care unit. Clinicians who subsequently assumed the care of the patients were blinded to the treatment assignment. In-hospital mortality (the primary efficacy outcome), end points with respect to resuscitation, and Acute Physiology and Chronic Health Evaluation (APACHE II) scores were obtained serially for 72 hours and compared between the study groups. Results Of the 263 enrolled patients, 130 were randomly assigned to early goal-directed therapy and 133 to standard therapy; there were no significant differences between the groups with respect to base-line characteristics. In-hospital mortality was 30.5 percent in the group assigned to early goal-directed therapy, as compared with 46.5 percent in the group assigned to standard therapy (P=0.009). During the interval from 7 to 72 hours, the patients assigned to early goaldirected therapy had a significantly higher mean (±SD) central venous oxygen saturation (70.4±10.7 percent vs. 65.3±11.4 percent), a lower lactate concentration (3.0±4.4 vs. 3.9±4.4 mmol per liter), a lower base deficit (2.0±6.6 vs. 5.1±6.7 mmol per liter), and a higher pH (7.40±0.12 vs. 7.36±0.12) than the patients assigned to standard therapy (P«0.02 for all comparisons). During the same period, mean APACHE II scores were significantly lower, indicating less severe organ dysfunction, in the patients assigned to early goal-directed therapy than in those assigned to standard therapy (13.0±6.3 vs. 15.9±6.4, P<0.001). Conclusions Early goal-directed therapy provides significant benefits with respect to outcome in patients with severe sepsis and septic shock. (N Engl J Med 2001;345:1368-77.)",
"title": ""
},
{
"docid": "0c70966c4dbe41458f7ec9692c566c1f",
"text": "By 2012 the U.S. military had increased its investment in research and production of unmanned aerial vehicles (UAVs) from $2.3 billion in 2008 to $4.2 billion [1]. Currently UAVs are used for a wide range of missions such as border surveillance, reconnaissance, transportation and armed attacks. UAVs are presumed to provide their services at any time, be reliable, automated and autonomous. Based on these presumptions, governmental and military leaders expect UAVs to improve national security through surveillance or combat missions. To fulfill their missions, UAVs need to collect and process data. Therefore, UAVs may store a wide range of information from troop movements to environmental data and strategic operations. The amount and kind of information enclosed make UAVs an extremely interesting target for espionage and endangers UAVs of theft, manipulation and attacks. Events such as the loss of an RQ-170 Sentinel to Iranian military forces on 4th December 2011 [2] or the “keylogging” virus that infected an U.S. UAV fleet at Creech Air Force Base in Nevada in September 2011 [3] show that the efforts of the past to identify risks and harden UAVs are insufficient. Due to the increasing governmental and military reliance on UAVs to protect national security, the necessity of a methodical and reliable analysis of the technical vulnerabilities becomes apparent. We investigated recent attacks and developed a scheme for the risk assessment of UAVs based on the provided services and communication infrastructures. We provide a first approach to an UAV specific risk assessment and take into account the factors exposure, communication systems, storage media, sensor systems and fault handling mechanisms. We used this approach to assess the risk of some currently used UAVs: The “MQ-9 Reaper” and the “AR Drone”. A risk analysis of the “RQ-170 Sentinel” is discussed.",
"title": ""
},
{
"docid": "b1b18ffff0f9efdef25dd15099139b7e",
"text": "This paper presents a fast and accurate alignment method for polyphonic symbolic music signals. It is known that to accurately align piano performances, methods using the voice structure are needed. However, such methods typically have high computational cost and they are applicable only when prior voice information is given. It is pointed out that alignment errors are typically accompanied by performance errors in the aligned signal. This suggests the possibility of correcting (or realigning) preliminary results by a fast (but not-so-accurate) alignment method with a refined method applied to limited segments of aligned signals, to save the computational cost. To realise this, we develop a method for detecting performance errors and a realignment method that works fast and accurately in local regions around performance errors. To remove the dependence on prior voice information, voice separation is performed to the reference signal in the local regions. By applying our method to results obtained by previously proposed hidden Markov models, the highest accuracies are achieved with short computation time. Our source code is published in the accompanying web page, together with a user interface to examine and correct alignment results.",
"title": ""
},
{
"docid": "c2fb88df12e97e8475bb923063c8a46e",
"text": "This paper addresses the job shop scheduling problem in the presence of machine breakdowns. In this work, we propose to exploit the advantages of data mining techniques to resolve the problem. We proposed an approach to discover a set of classification rules by using historic scheduling data. Intelligent decisions are then made in real time based on this constructed rules to assign the corresponding dispatching rule in a dynamic job shop scheduling environment. A simulation study is conducted at last with the constructed rules and four other dispatching rules from literature. The experimental results verify the performance of classification rule for minimizing mean tardiness.",
"title": ""
},
{
"docid": "7edb8a803734f4eb9418b8c34b1bf07c",
"text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.",
"title": ""
},
{
"docid": "21f6a18e34579ae482c93c3476828729",
"text": "A low power highly sensitive Thoracic Impedance Variance (TIV) and Electrocardiogram (ECG) monitoring SoC is designed and implemented into a poultice-like plaster sensor for wearable cardiac monitoring. 0.1 Ω TIV detection is possible with a sensitivity of 3.17 V/Ω and SNR > 40 dB. This is achieved with the help of a high quality (Q-factor > 30) balanced sinusoidal current source and low noise reconfigurable readout electronics. A cm-range 13.56 MHz fabric inductor coupling is adopted to start/stop the SoC remotely. Moreover, a 5% duty-cycled Body Channel Communication (BCC) is exploited for 0.2 nJ/b 1 Mbps energy efficient external data communication. The proposed SoC occupies 5 mm × 5 mm including pads in a standard 0.18 μm 1P6M CMOS technology. It dissipates a peak power of 3.9 mW when operating in body channel receiver mode, and consumes 2.4 mW when operating in TIV and ECG detection mode. The SoC is integrated on a 15 cm × 15 cm fabric circuit board together with a flexible battery to form a compact wearable sensor. With 25 adhesive screen-printed fabric electrodes, detection of TIV and ECG at 16 different sites of the heart is possible, allowing optimal detection sites to be configured to accommodate different user dependencies.",
"title": ""
}
] |
scidocsrr
|
441a8cccfe1b05140b8bed527e8a2359
|
Building a Recommender Agent for e-Learning Systems
|
[
{
"docid": "323113ab2bed4b8012f3a6df5aae63be",
"text": "Clustering data generally involves some input parameters or heuristics that are usually unknown at the time they are needed. We discuss the general problem of parameters in clustering and present a new approach, TURN, based on boundary detection and apply it to the clustering of web log data. We also present the use of di erent lters on the web log data to focus the clustering results and discuss di erent coeÆcients for de ning similarity in a non-Euclidean space.",
"title": ""
}
] |
[
{
"docid": "d297360f609e4b03c9d70fda7cc04123",
"text": "This paper describes an FPGA implementation of a single-precision floating-point multiply-accumulator (FPMAC) that supports single-cycle accumulation while maintaining high clock frequencies. A non-traditional internal representation reduces the cost of mantissa alignment within the accumulator. The FPMAC is evaluated on an Altera Stratix III FPGA.",
"title": ""
},
{
"docid": "35981768a2a46c2dd9d52ebbd5b63750",
"text": "A vehicle detection and classification system has been developed based on a low-cost triaxial anisotropic magnetoresistive sensor. Considering the characteristics of vehicle magnetic detection signals, especially the signals for low-speed congested traffic in large cities, a novel fixed threshold state machine algorithm based on signal variance is proposed to detect vehicles within a single lane and segment the vehicle signals effectively according to the time information of vehicles entering and leaving the sensor monitoring area. In our experiments, five signal features are extracted, including the signal duration, signal energy, average energy of the signal, ratio of positive and negative energy of x-axis signal, and ratio of positive and negative energy of y-axis signal. Furthermore, the detected vehicles are classified into motorcycles, two-box cars, saloon cars, buses, and Sport Utility Vehicle commercial vehicles based on a classification tree model. The experimental results have shown that the detection accuracy of the proposed algorithm can reach up to 99.05% and the average classification accuracy is 93.66%, which verify the effectiveness of our algorithm for low-speed congested traffic.",
"title": ""
},
{
"docid": "176cf87aa657a5066a02bfb650532070",
"text": "Structural Design of Reinforced Concrete Tall Buildings Author: Ali Sherif S. Rizk, Director, Dar al-Handasah Shair & Partners Subject: Structural Engineering",
"title": ""
},
{
"docid": "02c687cbe7961f082c60fad1cc3f3f80",
"text": "The simplicity of Transpose Jacobian (TJ) control is a significant characteristic of this algorithm for controlling robotic manipulators. Nevertheless, a poor performance may result in tracking of fast trajectories, since it is not dynamics-based. Use of high gains can deteriorate performance seriously in the presence of feedback measurement noise. Another drawback is that there is no prescribed method of selecting its control gains. In this paper, based on feedback linearization approach a Modified TJ (MTJ) algorithm is presented which employs stored data of the control command in the previous time step, as a learning tool to yield improved performance. The gains of this new algorithm can be selected systematically, and do not need to be large, hence the noise rejection characteristics of the algorithm are improved. Based on Lyapunov’s theorems, it is shown that both the standard and the MTJ algorithms are asymptotically stable. Analysis of the required computational effort reveals the efficiency of the proposed MTJ law compared to the Model-based algorithms. Simulation results are presented which compare tracking performance of the MTJ algorithm to that of the TJ and Model-Based algorithms in various tasks. Results of these simulations show that performance of the new MTJ algorithm is comparable to that of Computed Torque algorithms, without requiring a priori knowledge of plant dynamics, and with reduced computational burden. Therefore, the proposed algorithm is well suited to most industrial applications where simple efficient algorithms are more appropriate than complicated theoretical ones with massive computational burden. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b22137cbb14396f1dcd24b2a15b02508",
"text": "This paper studies the self-alignment properties between two chips that are stacked on top of each other with copper pillars micro-bumps. The chips feature alignment marks used for measuring the resulting offset after assembly. The accuracy of the alignment is found to be better than 0.5 µm in × and y directions, depending on the process. The chips also feature waveguides and vertical grating couplers (VGC) fabricated in the front-end-of-line (FEOL) and organized in order to realize an optical interconnection between the chips. The coupling of light between the chips is measured and compared to numerical simulation. This high accuracy self-alignment was obtained after studying the impact of flux and fluxless treatments on the wetting of the pads and the successful assembly yield. The composition of the bump surface was analyzed with Time-of-Flight Secondary Ions Mass Spectroscopy (ToF-SIMS) in order to understand the impact of each treatment. This study confirms that copper pillars micro-bumps can be used to self-align photonic integrated circuits (PIC) with another die (for example a microlens array) in order to achieve high throughput alignment of optical fiber to the PIC.",
"title": ""
},
{
"docid": "e4007c7e6a80006238e1211a213e391b",
"text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a diierent parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantiied. Relative rankings of the policies are obtained, depending on the speciic work-load characteristics. A trade-oo is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.",
"title": ""
},
{
"docid": "18b3328725661770be1f408f37c7eb64",
"text": "Researchers have proposed various machine learning algorithms for traffic sign recognition, which is a supervised multicategory classification problem with unbalanced class frequencies and various appearances. We present a novel graph embedding algorithm that strikes a balance between local manifold structures and global discriminative information. A novel graph structure is designed to depict explicitly the local manifold structures of traffic signs with various appearances and to intuitively model between-class discriminative information. Through this graph structure, our algorithm effectively learns a compact and discriminative subspace. Moreover, by using L2, 1-norm, the proposed algorithm can preserve the sparse representation property in the original space after graph embedding, thereby generating a more accurate projection matrix. Experiments demonstrate that the proposed algorithm exhibits better performance than the recent state-of-the-art methods.",
"title": ""
},
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "307d9742739cbd2ade98c3d3c5d25887",
"text": "In this paper, we present a smart US imaging system (SMUS) based on an android-OS smartphone, which can provide maximally optimized efficacy in terms of weight and size in point-of-care diagnostic applications. The proposed SMUS consists of the smartphone (Galaxy S5 LTE-A, Samsung., Korea) and a 16-channel probe system. The probe system contains analog and digital front-ends, which conducts beamforming and mid-processing procedures. Otherwise, the smartphone performs the back-end processing including envelope detection, log compression, 2D image filtering, digital scan conversion, and image display with custom-made graphical user interface (GUI). Note that the probe system and smartphone are interconnected by the USB 3.0 protocol. As a result, the developed SMUS can provide real-time B-mode image with the sufficient frame rate (i.e., 58 fps), battery run-time for point-of-care diagnosis (i.e., 54 min), and 35.0°C of transducer surface temperature during B-mode imaging, which satisfies the temperature standards for the safety and effectiveness of medical electrical equipment, IEC 60601-1 (i.e., 43°C).",
"title": ""
},
{
"docid": "4d4a09c7cef74e9be52844a61ca57bef",
"text": "The key of zero-shot learning (ZSL) is how to find the information transfer model for bridging the gap between images and semantic information (texts or attributes). Existing ZSL methods usually construct the compatibility function between images and class labels with the consideration of the relevance on the semantic classes (the manifold structure of semantic classes). However, the relationship of image classes (the manifold structure of image classes) is also very important for the compatibility model construction. It is difficult to capture the relationship among image classes due to unseen classes, so that the manifold structure of image classes often is ignored in ZSL. To complement each other between the manifold structure of image classes and that of semantic classes information, we propose structure propagation (SP) for improving the performance of ZSL for classification. SP can jointly consider the manifold structure of image classes and that of semantic classes for approximating to the intrinsic structure of object classes. Moreover, the SP can describe the constrain condition between the compatibility function and these manifold structures for balancing the influence of the structure propagation iteration. The SP solution provides not only unseen class labels but also the relationship of two manifold structures that encode the positive transfer in structure propagation. Experimental results demonstrate that SP can attain the promising results on the AwA, CUB, Dogs and SUN databases.",
"title": ""
},
{
"docid": "4100a10b2a03f3a1ba712901cee406d2",
"text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.",
"title": ""
},
{
"docid": "b7ca3a123963bb2f0bfbe586b3bc63d0",
"text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.",
"title": ""
},
{
"docid": "a5e960a4b20959a1b4a85e08eebab9d3",
"text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.",
"title": ""
},
{
"docid": "d6f473f6b6758b2243dde898840656b0",
"text": "In this paper, we introduce the new generation 3300V HiPak2 IGBT module (130x190)mm employing the recently developed TSPT+ IGBT with Enhanced Trench MOS technology and Field Charge Extraction (FCE) diode. The new chip-set enables IGBT modules with improved electrical performance in terms of low losses, good controllability, high robustness and soft diode recovery. Due to the lower losses and the excellent SOA, the current rating of the 3300V HiPak2 module can be increased from 1500A for the current SPT+ generation to 1800A for the new TSPT+ version.",
"title": ""
},
{
"docid": "7635d39eda6ac2b3969216b39a1aa1f7",
"text": "We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. It splits an object's light field into multiple instances that are each in-focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when observing the display with naked eyes. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where using eyeglasses are unfeasible or inconvenient (e.g., on head-mounted displays, e-readers, as well as for games); when a multi-focus function is required but undoable (e.g., driving for farsighted individuals, checking a portable device while doing physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses are not able to.",
"title": ""
},
{
"docid": "69f710a71b27cf46039d54e20b5f589b",
"text": "This paper presents a new needle deflection model that is an extension of prior work in our group based on the principles of beam theory. The use of a long flexible needle in percutaneous interventions necessitates accurate modeling of the generated curved trajectory when the needle interacts with soft tissue. Finding a feasible model is important in simulators with applications in training novice clinicians or in path planners used for needle guidance. Using intra-operative force measurements at the needle base, our approach relates mechanical and geometric properties of needle-tissue interaction to the net amount of deflection and estimates the needle curvature. To this end, tissue resistance is modeled by introducing virtual springs along the needle shaft, and the impact of needle-tissue friction is considered by adding a moving distributed external force to the bending equations. Cutting force is also incorporated by finding its equivalent sub-boundary conditions. Subsequently, the closed-from solution of the partial differential equations governing the planar deflection is obtained using Green's functions. To evaluate the performance of our model, experiments were carried out on artificial phantoms.",
"title": ""
},
{
"docid": "c0f46732345837cf959ea9ee030874fd",
"text": "In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts are provided along with opportunities for future work in the modification of NMF algorithms for large-scale and time-varying datasets.",
"title": ""
},
{
"docid": "481d62df8c6cc7ed6bc93a4e3c27a515",
"text": "Minutiae points are defined as the minute discontinuities of local ridge flows, which are widely used as the fine level features for fingerprint recognition. Accurate minutiae detection is important and traditional methods are often based on the hand-crafted processes such as image enhancement, binarization, thinning and tracing of the ridge flows etc. These methods require strong prior knowledge to define the patterns of minutiae points and are easily sensitive to noises. In this paper, we propose a machine learning based algorithm to detect the minutiae points with the gray fingerprint image based on Convolution Neural Networks (CNN). The proposed approach is divided into the training and testing stages. In the training stage, a number of local image patches are extracted and labeled and CNN models are trained to classify the image patches. The test fingerprint is scanned with the CNN model to locate the minutiae position in the testing stage. To improve the detection accuracy, two CNN models are trained to classify the local patch into minutiae v.s. non-minutiae and into ridge ending v.s. bifurcation, respectively. In addition, multi-scale CNNs are constructed with the image patches of varying sizes and are combined to achieve more accurate detection. Finally, the proposed algorithm is tested the fingerprints of FVC2002 DB1 database. Experimental results and comparisons have been presented to show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "57a48dee2cc149b70a172ac5785afc6c",
"text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.",
"title": ""
},
{
"docid": "438e690466823b7ae79cf28f62ba87be",
"text": "Decades of research have documented that young word learners have more difficulty learning verbs than nouns. Nonetheless, recent evidence has uncovered conditions under which children as young as 24 months succeed. Here, we focus in on the kind of linguistic information that undergirds 24-month-olds' success. We introduced 24-month-olds to novel words (either nouns or verbs) as they watched dynamic scenes (e.g., a man waving a balloon); the novel words were presented in semantic contexts that were either rich (e.g., The man is pilking a balloon), or more sparse (e.g., He's pilking it). Toddlers successfully learned nouns in both the semantically rich and sparse contexts, but learned verbs only in the rich context. This documents that to learn the meaning of a novel verb, English-acquiring toddlers take advantage of the semantically rich information provided in lexicalized noun phrases. Implications for cross-linguistic theories of acquisition are discussed.",
"title": ""
}
] |
scidocsrr
|
c340a561d9f52732f7156d9f51afc0fd
|
Poisson-driven seamless completion of triangular meshes
|
[
{
"docid": "da87c8385ac485fe5d2903e27803c801",
"text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the polygon mesh processing. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book.",
"title": ""
}
] |
[
{
"docid": "07a21e6badefd068dd9da0fa86957ea2",
"text": "Design details of reduction of asymmetrical hypertrophy of the breasts are discussed. The concepts apply to nearly any breast reduction technique, although here they are presented in the context of the inferior segment method, which we favor. Although most of these points may seem self-evident to the experienced plastic surgeon, there has been little written about the details.",
"title": ""
},
{
"docid": "fdcea57edbe935ec9949247fd47888e6",
"text": "Maintenance of skeletal muscle mass is contingent upon the dynamic equilibrium (fasted losses-fed gains) in protein turnover. Of all nutrients, the single amino acid leucine (Leu) possesses the most marked anabolic characteristics in acting as a trigger element for the initiation of protein synthesis. While the mechanisms by which Leu is 'sensed' have been the subject of great scrutiny, as a branched-chain amino acid, Leu can be catabolized within muscle, thus posing the possibility that metabolites of Leu could be involved in mediating the anabolic effect(s) of Leu. Our objective was to measure muscle protein anabolism in response to Leu and its metabolite HMB. Using [1,2-(13)C2]Leu and [(2)H5]phenylalanine tracers, and GC-MS/GC-C-IRMS we studied the effect of HMB or Leu alone on MPS (by tracer incorporation into myofibrils), and for HMB we also measured muscle proteolysis (by arteriovenous (A-V) dilution). Orally consumed 3.42 g free-acid (FA-HMB) HMB (providing 2.42 g of pure HMB) exhibited rapid bioavailability in plasma and muscle and, similarly to 3.42 g Leu, stimulated muscle protein synthesis (MPS; HMB +70% vs. Leu +110%). While HMB and Leu both increased anabolic signalling (mechanistic target of rapamycin; mTOR), this was more pronounced with Leu (i.e. p70S6K1 signalling 90 min vs. 30 min for HMB). HMB consumption also attenuated muscle protein breakdown (MPB; -57%) in an insulin-independent manner. We conclude that exogenous HMB induces acute muscle anabolism (increased MPS and reduced MPB) albeit perhaps via distinct, and/or additional mechanism(s) to Leu.",
"title": ""
},
{
"docid": "dccfb7f6d6ac9014f668b0fa6a994931",
"text": "Orthodontics has the potential to cause significant damage to hard and soft tissues. The most important aspect of orthodontic care is to have an extremely high standard of oral hygiene before and during orthodontic treatment. It is also essential that any carious lesions are dealt with before any active treatment starts. Root resorption is a common complication during orthodontic treatment but there is some evidence that once appliances are removed this resorption stops. Some of the risk pointers for root resorption are summarised. Soft tissue damage includes that caused by archwires but also the more harrowing potential for headgears to cause damage to eyes. It is essential that adequate safety measures are included with this type of treatment.",
"title": ""
},
{
"docid": "d0bf44f333658339650ae725dc229980",
"text": "We propose a new multi-frame method for efficiently computing scene flow (dense depth and optical flow) and camera ego-motion for a dynamic scene observed from a moving stereo camera rig. Our technique also segments out moving objects from the rigid scene. In our method, we first estimate the disparity map and the 6-DOF camera motion using stereo matching and visual odometry. We then identify regions inconsistent with the estimated camera motion and compute per-pixel optical flow only at these regions. This flow proposal is fused with the camera motion-based flow proposal using fusion moves to obtain the final optical flow and motion segmentation. This unified framework benefits all four tasks – stereo, optical flow, visual odometry and motion segmentation leading to overall higher accuracy and efficiency. Our method is currently ranked third on the KITTI 2015 scene flow benchmark. Furthermore, our CPU implementation runs in 2-3 seconds per frame which is 1-3 orders of magnitude faster than the top six methods. We also report a thorough evaluation on challenging Sintel sequences with fast camera and object motion, where our method consistently outperforms OSF [30], which is currently ranked second on the KITTI benchmark.",
"title": ""
},
{
"docid": "237ad676cccd55b969a3a96364b59a96",
"text": "Modeling Modeling is not merely a process of behavioral mimicry. Highly functional patterns of behavior, which constitute the proven skills and established customs of a culture, may be adopted in essentially the same form as they are exemplified. There is little leeway for 25 improvisation on how to drive automobiles or to perform arithmetic operations. However, in many activities, subskills must be improvised to suit varying circumstances. Modeling influences can convey rules for generative and innovative behavior as well. This higher-level learning is achieved through abstract modeling. Rule-governed behavior differs in specific content and other details but it contains the same underlying rule. For example, the modeled statements, \"The dog is being petted,\" and \"the window was opened\" refer to different things but the linguistic rule-the passive form--is the same. In abstract modeling, observers extract the rule embodied in the specific behavior exhibited by others. Once they learn the rule, they can use it to generate new instances of behavior that go beyond what they have seen or heard. Much human learning is aimed at developing cognitive skills on how to gain and use knowledge for future use. Observational learning of thinking skills is greatly facilitated by modeling thought processes in conjunction with action strategies (Meichenbaum, 1984). Models verbalize their thought strategies as they engage in problem-solving activities. The ordinarily covert thoughts guiding the actions of the models are thus made observable and learnable by others. Modeling has been shown to be a highly effective means of establishing abstract or rulegoverned behavior. On the basis of modeled information, people acquire, among other things, judgmental standards, linguistic rules, styles of inquiry, information-processing skills, and standards of self-evaluation (Bandura, 1986; Rosenthal & Zimmerman, 1978). Evidence that generative rules of thought and conduct can be created through abstract modeling attests to the broad scope of observational learning. Development of Modeling Capabilities Because observational learning involves several subfunctions that evolve with maturation and experiences, it depends upon prior development. When analyzed in terms of its constituent 26 subfunctions, facility in observational learning is not primarily a matter of learning to imitate. Nor is it a discrete skill. Rather, developing adeptness in observational learning involves acquiring multiple subskills in selective attention, cognitive representation, symbolic transformation, and anticipatory motivation. Neonates possess greater modeling capabilities than is commonly believed. By several months of age, infants can and do model behavior with some consistency (Kaye, 1982; Meltzoff & Moore, 1983; Valentine, 1930). The development of proficiency in observational learning is grounded in social reciprocation. Infants possess sufficient rudimentary representational capacities and sensorimotor coordination to enable them to imitate elementary sounds and acts within their physical capabilities. Parents readily imitate a newborn's gestures and vocalizations from the very beginning, often in expressive ways that have been shown to facilitate modeling (Papousek & Papousek, 1977; Pawlby, 1977). The newborn, whose means of communication and social influence are severely limited, learns that reciprocal imitation is an effective way of eliciting and sustaining parental responsiveness. 
Uzgiris (1984) has given considerable attention to the social function of imitation in infancy. Mutual imitation serves as a means of conveying interest and sharing experiences. Initially, parents tend to model acts that infants spontaneously perform. After the reciprocal imitation is established, parents are quick to initiate new response patterns for imitative sequences that help to expand their infant's competencies (Pawlby, 1977). Successful modeling of these more complicated patterns of behavior require development of the major subfunctions that govern observational learning. It is to the developmental course of these subfunctions that we turn next. 27 Attentional Processes Young children present certain attentional deficiencies that limit their proficiency in observational learning. They have difficulty attending to different sorts of information at the same time, distinguishing pertinent aspects from irrelevancies, and maintaining attention to ongoing events long enough to acquire sufficient information about them (Cohen & Salapatek, 1975; Hagen & Hale, 1973). They are easily distracted. With increasing experience, children's attentional skills improve in all these respects. In promoting observational learning, adults alter the behavior they model to compensate for the attentional limitations of children. With infants, parents gain their attention and give salience to the behavior they want to encourage by selectively imitating them. Parents tend to perform the reciprocated imitations in an exaggerated animated fashion that is well designed to sustain the child's attentiveness at a high level during the mutual modeling sequences (Papousek & Papousek, 1977). The animated social interplay provides a vehicle for channeling and expanding infants' attention in activities that go beyond those they have already mastered. The attention-arousing value of the modeled acts, themselves, also influences what infants are likely to adopt. Infants are more attentive to, and imitate more often, modeled acts when they involve objects and sound accompaniments than when they are modeled silently and without objects to draw attention (Abravanel et al., 1976; Uzgiris, 1979). The more attention infants pay to the modeled activities, the more likely they are to adopt them. As infants attentional capabilities increase, parents exhibit developmentally progressive activities for them to model. 28 Representational Processes In developing representational memory skills, children have to learn how to transform modeled information into symbolic forms and to organize it into easily remembered structures. They also have to learn how to use timely rehearsal to facilitate the retention of activities that are vulnerable to memory loss. It takes time for children to learn that they can improve their future performances by symbolizing and rehearsing their immediate experiences. In the earliest period of development, experiences are probably retained mainly in imaginal modes of representation. Infants will model single acts, but they have difficulty reproducing coordinated sequences that require them to remember how several actions are strung together (McCall, Parke, & Kavanaugh, 1977). They often start the sequence right and simply forget what comes next. With experience, they become more skilled at delayed modeling. Indeed, infants even as young as 18 months will enact behavior learned from televised models after some time has elapsed. Delayed performances of this type require symbolic memory. 
As children begin to acquire language they can symbolize the essential aspects of events in words for memory representation. It is not until children acquire some cognitive and linguistic skills that they can extract rules from modeled performances and make effective use of the more complex linguistic transformations (Rosenthal & Zimmerman, 1978). Children can improve their memory by instruction to anticipate, verbally code, and rehearse what they observe (Brown & Barclay, 1976). The vicarious memorial subskills can also be acquired through modeling. By observing the memory feats of others, children learn what information is worth coding, how events should be categorized, and more general strategies for processing information (Lamal, 1971; Rosenthal & Zimmerman, 1978). 29 Production Processes Converting conceptions to appropriate actions requires development of transformational skills in intermodal guidance of behavior. Information in the symbolic mode must be translated into corresponding action modes. This involves learning how to organize action sequences, to monitor and compare behavioral enactments against the symbolic model, and to correct evident mismatches (Carroll & Bandura, 1985; 1987). When children must depend on what others tell them, because they cannot observe fully all of their own actions, detecting and correcting mismatches requires linguistic competencies. Deficiencies in any of these production subskills can create a developmental lag between comprehending and performing. Motivational Processes Motivational factors that influence the use to which modeled knowledge is put undergo significant developmental changes. During infancy, imitation functions mainly to secure interpersonal responsiveness. Through mutual modeling with adults, infants enjoy playful intimacy and gain experience in social reciprocation. Before long parents cease mimicking their infant's actions, but they remain responsive to instances of infants adopting modeled patterns that expand their competencies. What continues to serve a social function for young infants changes into an instructional vehicle for parents. This transition requires infants to cognize, on the basis of observed regularities, the social effects their different imitations are likely to produce. To help infants to learn the functional value of modeling the parents make the outcomes salient, recurrent, consistent, and closely tied to the infant's actions (Papousek & Papousek, 1977). With increasing cognitive development, children become more skilled at judging probable outcomes of their actions. Such outcome expectations serve as incentives for observational learning. 30 What has incentive value for children also changes with experience. At the earliest level, infants and young children are motivated primarily by the immediate sensory and social effects of their actions. In the course of development, symbolic incentives signifying achievements, the exercise of mas",
"title": ""
},
{
"docid": "2168edeee6171ef9df18f74f9b1d2c47",
"text": "We present a novel high-level parallel programming model aimed at graphics processing units (GPUs). We embed GPU kernels as data-parallel array computations in the purely functional language Haskell. GPU and CPU computations can be freely interleaved with the type system tracking the two different modes of computation. The embedded language of array computations is sufficiently limited that our system can automatically extract these computations and compile them to efficient GPU code. In this paper, we outline our approach and present the results of a few preliminary benchmarks.",
"title": ""
},
{
"docid": "8f24898cb21a259d9260b67202141d49",
"text": "PROBLEM\nHow can human contributions to accidents be reconstructed? Investigators can easily take the position a of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.\n\n\nMETHOD AND RESULTS\nThis paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets.\n\n\nIMPACT ON INDUSTRY\nIn an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.",
"title": ""
},
{
"docid": "0b631a4139efb14c1fe43876b29cf1c6",
"text": "In recent years, remote sensing image data have increased significantly due to the improvement of remote sensing technique. On the other hand, data acquisition rate will also be accelerated by increasing satellite sensors. Hence, it is a large challenge to make full use of so considerable data by conventional retrieval approach. The lack of semantic based retrieval capability has impeded application of remote sensing data. To address the issue, we propose a framework based on domain-dependent ontology to perform semantic retrieval in image archives. Firstly, primitive features expressed by color and texture are extracted to gain homogeneous region by means of our unsupervised algorithm. The homogeneous regions are described by high-level concepts depicted and organized by domain specific ontology. Interactive learning technique is employed to associate regions and high-level concepts. These associations are used to perform querying task. Additionally, a reasoning mechanism over ontology integrating an inference engine is discussed. It enables the capability of semantic query in archives by mining the interrelationships among domain concepts and their properties to satisfy users’ requirements. In our framework, ontology is used to provide a sharable and reusable concept set as infrastructure for high level extension such as reasoning. Finally, preliminary results are present and future work is also discussed. KeywordsImage retrieval; Ontology; Semantic reasoning;",
"title": ""
},
{
"docid": "990c1e569bf489d23182d5778a3c1b3f",
"text": "The future Internet is an IPv6 network interconnecting traditional computers and a large number of smart objects. This Internet of Things (IoT) will be the foundation of many services and our daily life will depend on its availability and reliable operation. Therefore, among many other issues, the challenge of implementing secure communication in the IoT must be addressed. In the traditional Internet, IPsec is the established and tested way of securing networks. It is therefore reasonable to explore the option of using IPsec as a security mechanism for the IoT. Smart objects are generally added to the Internet using IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN), which defines IP communication for resource-constrained networks. Thus, to provide security for the IoT based on the trusted and tested IPsec mechanism, it is necessary to define an IPsec extension of 6LoWPAN. In this paper, we present such a 6LoWPAN/IPsec extension and show the viability of this approach. We describe our 6LoWPAN/IPsec implementation, which we evaluate and compare with our implementation of IEEE 802.15.4 link-layer security. We also show that it is possible to reuse crypto hardware within existing IEEE 802.15.4 transceivers for 6LoWPAN/IPsec. The evaluation results show that IPsec is a feasible option for securing the IoT in terms of packet size, energy consumption, memory usage, and processing time. Furthermore, we demonstrate that in contrast to common belief, IPsec scales better than link-layer security as the data size and the number of hops grow, resulting in time and energy savings. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "0894fa5925eae1d0f1235115dee8d85e",
"text": "Using aperture-fed stacked curl elements, a full corporate-fed millimeter-wave planar broadband circularly polarized antenna array is proposed. First, the stacked curl element is aperture-fed, using the substrate-integrated waveguide (SIW) feeding network. This element is designed under a specific boundary condition, which leads to an excellent agreement of impedance bandwidth and axial ratio (AR) bandwidth between the designed element and the corresponding antenna array. Simulation results show that the antenna element achieves 35.8% impedance bandwidth and 37.8% 3 dB AR bandwidth with a maximum gain of 8.2 dBic. Then, a <inline-formula> <tex-math notation=\"LaTeX\">$2\\times 2$ </tex-math></inline-formula> subarray is designed to achieve 33.1% impedance bandwidth. Next, the full-corporate SIW feeding scheme is utilized to construct the <inline-formula> <tex-math notation=\"LaTeX\">$8\\times 8$ </tex-math></inline-formula> broadband antenna array. Finally, the <inline-formula> <tex-math notation=\"LaTeX\">$8\\times 8$ </tex-math></inline-formula> antenna array is fabricated and its performance is evaluated. Measurement results show that its impedance bandwidth is 35.4%, 3 dB AR bandwidth is 33.8%, and 3 dB gain bandwidth is 32.2% with a peak gain of 23.5 dBic. Considering these three types of bandwidth together, its overall bandwidth is 30.6%. Good efficiency of about 70% is obtained because of the low-loss full-corporate SIW feeding scheme. A <inline-formula> <tex-math notation=\"LaTeX\">$4\\times 4$ </tex-math></inline-formula> multibeam array is also simulated to verify the multibeam potential of the proposed antenna array.",
"title": ""
},
{
"docid": "5f68b3ab2253349941fc1bf7e602c6a2",
"text": "Motivated by recent advances in adaptive sparse representations and nonlocal image modeling, we propose a patch-based image interpolation algorithm under a set theoretic framework. Our algorithm alternates the projection onto two convex sets: one is given by the observation data and the other defined by a sparsity-based nonlocal prior similar to BM3D. In order to optimize the design of observation constraint set, we propose to address the issue of sampling pattern and model it by a spatial point process. A Monte-Carlo based algorithm is proposed to optimize the randomness of sampling patterns to better approximate homogeneous Poisson process. Extensive experimental results in image interpolation and coding applications are reported to demonstrate the potential of the proposed algorithms.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "7209139cc4d9ca5d41eba5e0f5256eb9",
"text": "Over the last few years, research on learning and memory has become increasingly interdisciplinary. In the past, theories of learning, as a prerogative of psychologists, were generally formulated in purely verbal terms and evaluated exclusively at the behavioral level. At present, scientists are trying to build theories with a quantitative and biological flavor, seeking to embrace more complex behavioral phenomena. Pavlovian conditioning, one of the simplest and ubiquitous forms of learning, is especially suited for this multiple level analysis (i.e., quantitative, neurobiological, and behavioral), in part because of recent discoveries showing a correspondence between behavioral phenomena and associative properties at the cellular and systems levels, and in part because of its well established quantitative theoretical tradition. The present review, examines the mayor quantitative theories of Pavlovian conditioning and the phenomena to which they have been designed to account. In order to provide researchers from different disciplines with a simple guideline about the rationale of the different theoretical choices, all the models are described through a single formalism based on the neural network connectionist perspective.",
"title": ""
},
{
"docid": "507a60e62e9d2086481e7a306d012e52",
"text": "Health monitoring systems have rapidly evolved recently, and smart systems have been proposed to monitor patient current health conditions, in our proposed and implemented system, we focus on monitoring the patient's blood pressure, and his body temperature. Based on last decade statistics of medical records, death rates due to hypertensive heart disease, shows that the blood pressure is a crucial risk factor for atherosclerosis and ischemic heart diseases; thus, preventive measures should be taken against high blood pressure which provide the ability to track, trace and save patient's life at appropriate time is an essential need for mankind. Nowadays, Globalization demands Smart cities, which involves many attributes and services, such as government services, Intelligent Transportation Systems (ITS), energy, health care, water and waste. This paper proposes a system architecture for smart healthcare based on GSM and GPS technologies. The objective of this work is providing an effective application for Real Time Health Monitoring and Tracking. The system will track, trace, monitor patients and facilitate taking care of their health; so efficient medical services could be provided at appropriate time. By Using specific sensors, the data will be captured and compared with a configurable threshold via microcontroller which is defined by a specialized doctor who follows the patient; in any case of emergency a short message service (SMS) will be sent to the Doctor's mobile number along with the measured values through GSM module. furthermore, the GPS provides the position information of the monitored person who is under surveillance all the time. Moreover, the paper demonstrates the feasibility of realizing a complete end-to-end smart health system responding to the real health system design requirements by taking in consideration wider vital human health parameters such as respiration rate, nerves signs ... etc. The system will be able to bridge the gap between patients - in dramatic health change occasions- and health entities who response and take actions in real time fashion.",
"title": ""
},
{
"docid": "1be9813cd6765a4d3df0f84ff8580256",
"text": "Deep learning models learn to fit training data while they are highly expected to generalize well to testing data. Most works aim at finding such models by creatively designing architectures and fine-tuning parameters. To adapt to particular tasks, hand-crafted information such as image prior has also been incorporated into end-to-end learning. However, very little progress has been made on investigating how an individual training sample will influence the generalization ability of a model. In other words, to achieve high generalization accuracy, do we really need all the samples in a training dataset? In this paper, we demonstrate that deep learning models such as convolutional neural networks may not favor all training samples, and generalization accuracy can be further improved by dropping those unfavorable samples. Specifically, the influence of removing a training sample is quantifiable, and we propose a Two-Round Training approach, aiming to achieve higher generalization accuracy. We locate unfavorable samples after the first round of training, and then retrain the model from scratch with the reduced training dataset in the second round. Since our approach is essentially different from fine-tuning or further training, the computational cost should not be a concern. Our extensive experimental results indicate that, with identical settings, the proposed approach can boost performance of the well-known networks on both high-level computer vision problems such as image classification, and low-level vision problems such as image denoising.",
"title": ""
},
{
"docid": "bb31ee54930ed5e3f807bdb93dcb8b80",
"text": "Darwin charted the field of emotional expressions with five major contributions. Possible explanations of why he was able to make such important and lasting contributions are proposed. A few of the important questions that he did not consider are described. Two of those questions have been answered at least in part; one remains a major gap in our understanding of emotion.",
"title": ""
},
{
"docid": "b585947e882fca6f07b65dc940cc819f",
"text": "One way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone. In this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search. The results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced. Our findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.",
"title": ""
},
{
"docid": "1ceee71530d920adb32f4cab56b0c4a2",
"text": "Requirement analysis is the preliminary step in software development process. The requirements stated by the clients are analyzed and an abstraction of it, is created which is termed as requirement model. The automatic generation of UML diagram from natural language requirements is highly challenging and demanding very efficient methodology. Unified Modeling Language (UML) models are helpful for understanding the problems, communicating with application experts and preparing documentation. The static design view of the system can be modeled using a UML class diagram. System requirements stated by the user are usually in natural language form. This is an imprecise and inconsistent form which is difficult to be used by the developer for design UML model. We present a new methodology for generating UML diagrams or models from natural language problem statement or requirement specification. We have named our methodology as Requirement analysis and UML diagram extraction (RAUE).",
"title": ""
},
{
"docid": "e6b9c0064a8dcf2790a891e20a5bb01d",
"text": "The difficulty in inverse reinforcement learning (IRL) aris es in choosing the best reward function since there are typically an infinite number of eward functions that yield the given behaviour data as optimal. Using a Bayes i n framework, we address this challenge by using the maximum a posteriori (MA P) estimation for the reward function, and show that most of the previous IRL al gorithms can be modeled into our framework. We also present a gradient metho d for the MAP estimation based on the (sub)differentiability of the poster ior distribution. We show the effectiveness of our approach by comparing the performa nce of the proposed method to those of the previous algorithms.",
"title": ""
},
{
"docid": "e65d522f6b08eeebb8a488b133439568",
"text": "We propose a bootstrap learning algorithm for salient object detection in which both weak and strong models are exploited. First, a weak saliency map is constructed based on image priors to generate training samples for a strong model. Second, a strong classifier based on samples directly from an input image is learned to detect salient pixels. Results from multiscale saliency maps are integrated to further improve the detection performance. Extensive experiments on six benchmark datasets demonstrate that the proposed bootstrap learning algorithm performs favorably against the state-of-the-art saliency detection methods. Furthermore, we show that the proposed bootstrap learning approach can be easily applied to other bottom-up saliency models for significant improvement.",
"title": ""
}
] |
scidocsrr
|
8273a14719375c386589fbfadf432e2a
|
An Analytical Study of Routing Attacks in Vehicular Ad-hoc Networks ( VANETs )
|
[
{
"docid": "fd61461d5033bca2fd5a2be9bfc917b7",
"text": "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.",
"title": ""
}
] |
[
{
"docid": "c504800ce08654fb5bf49356d2f7fce3",
"text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.",
"title": ""
},
{
"docid": "d5509e4d4165872122609deddb440d40",
"text": "Model selection with cross validation (CV) is very popular in machine learning. However, CV with grid and other common search strategies cannot guarantee to find the model with minimum CV error, which is often the ultimate goal of model selection. Recently, various solution path algorithms have been proposed for several important learning algorithms including support vector classification, Lasso, and so on. However, they still do not guarantee to find the model with minimum CV error. In this paper, we first show that the solution paths produced by various algorithms have the property of piecewise linearity. Then, we prove that a large class of error (or loss) functions are piecewise constant, linear, or quadratic w.r.t. the regularization parameter, based on the solution path. Finally, we propose a new generalized error path algorithm (GEP), and prove that it will find the model with minimum CV error for the entire range of the regularization parameter. The experimental results on a variety of datasets not only confirm our theoretical findings, but also show that the best model with our GEP has better generalization error on the test data, compared to the grid search, manual search, and random search.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "b5f8f310f2f4ed083b20f42446d27feb",
"text": "This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "e4cea8ba1de77c94b658c83b08d4c584",
"text": "Algorithms, IEEE.it can be combined with many IP address lookup algorithms for fast update. Surveys on address lookup algorithms were given in 5 11 9. Ruiz-Sanchez, E.W. Dabbous, Survey and Taxonomy of. IP.AbstractIP address lookup is a key bottleneck for. Lookup algorithm based on a new memory organization. Survey and taxonomy of IP address lookup.IP routing requires that a router perform a longest-prefix-match address lookup for each incoming datagram in order to determine the datagrams next. A very quick survey at the time of writing indicates that.",
"title": ""
},
{
"docid": "65a4709f62c084cdd07fe54d834b8eaf",
"text": "Although in the era of third generation (3G) mobile networks technical hurdles are minor, the continuing failure of mobile payments (m-payments) withstands the endorsement by customers and service providers. A major reason is the uncommonly high interdependency of technical, human and market factors which have to be regarded and orchestrated cohesively to solve the problem. In this paper, we apply Business Model Ontology in order to develop an m-payment business model framework based on the results of a precedent multi case study analysis of 27 m-payment procedures. The framework is depicted with a system of morphological boxes and the interrelations between the associated characteristics. Representing any m-payment business model along with its market setting and influencing decisions as instantiations, the resulting framework enables researchers and practitioners for comprehensive analysis of existing and future models and provides a helpful tool for m-payment business model engineering.",
"title": ""
},
{
"docid": "d9f0f36e75c08d2c3097e85d8c2dec36",
"text": "Social software solutions in enterprises such as IBM Connections are said to have the potential to support communication and collaboration among employees. However, companies are faced to manage the adoption of such collaborative tools and therefore need to raise the employees’ acceptance and motivation. To solve these problems, developers started to implement Gamification elements in social software tools, which aim to increase users’ motivation. In this research-in-progress paper, we give first insights and critically examine the current market of leading social software solutions to find out which Gamification approaches are implementated in such collaborative tools. Our findings show, that most of the major social collaboration solutions do not offer Gamification features by default, but leave the integration to a various number of third party plug-in vendors. Furthermore we identify a trend in which Gamification solutions majorly focus on rewarding quantitative improvement of work activities, neglecting qualitative performance. Subsequently, current solutions do not match recent findings in research and ignore risks that can lower the employees’ motivation and work performance in the long run.",
"title": ""
},
{
"docid": "9d089af812c0fdd245a218362d88b62a",
"text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.",
"title": ""
},
{
"docid": "100c152685655ad6865f740639dd7d57",
"text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"title": ""
},
{
"docid": "e577c2827822bfe2f1fc177efeeef732",
"text": "This paper presents a control problem involving an experimental propeller setup that is called the twin rotor multi-input multi-output system (TRMS). The control objective is to make the beam of the TRMS move quickly and accurately to the desired attitudes, both the pitch angle and the azimuth angle in the condition of decoupling between two axes. It is difficult to design a suitable controller because of the influence between the two axes and nonlinear movement. For easy demonstration in the vertical and horizontal separately, the TRMS is decoupled by the main rotor and tail rotor. An intelligent control scheme which utilizes a hybrid PID controller is implemented to this problem. Simulation results show that the new approach to the TRMS control problem can improve the tracking performance and reduce control energy.",
"title": ""
},
{
"docid": "f50c735147be5112bc3c81107002d99a",
"text": "Over the years, several spatio-temporal interest point detectors have been proposed. While some detectors can only extract a sparse set of scaleinvariant features, others allow for the detection of a larger amount of features at user-defined scales. This paper presents for the first time spatio-temporal interest points that are at the same time scale-invariant (both spatially and temporally) and densely cover the video content. Moreover, as opposed to earlier work, the features can be computed efficiently. Applying scale-space theory, we show that this can be achieved by using the determinant of the Hessian as the saliency measure. Computations are speeded-up further through the use of approximative box-filter operations on an integral video structure. A quantitative evaluation and experimental results on action recognition show the strengths of the proposed detector in terms of repeatability, accuracy and speed, in comparison with previously proposed detectors.",
"title": ""
},
{
"docid": "40aa8b356983686472b3d2871add4491",
"text": "Illegal logging is in these days widespread problem. In this paper we propose the system based on principles of WSN for monitoring the forest. Acoustic signal processing and evaluation system described in this paper is dealing with the detection of chainsaw sound with autocorrelation method. This work is describing first steps in building the integrated system.",
"title": ""
},
{
"docid": "c07c69bf5e2fce6f9944838ce80b5b8c",
"text": "Many image editing applications rely on the analysis of image patches. In this paper, we present a method to analyze patches by embedding them to a vector space, in which the Euclidean distance reflects patch similarity. Inspired by Word2Vec, we term our approach Patch2Vec. However, there is a significant difference between words and patches. Words have a fairly small and well defined dictionary. Image patches, on the other hand, have no such dictionary and the number of different patch types is not well defined. The problem is aggravated by the fact that each patch might contain several objects and textures. Moreover, Patch2Vec should be universal because it must be able to map never-seen-before texture to the vector space. The mapping is learned by analyzing the distribution of all natural patches. We use Convolutional Neural Networks (CNN) to learn Patch2Vec. In particular, we train a CNN on labeled images with a triplet-loss objective function. The trained network encodes a given patch to a 128D vector. Patch2Vec is evaluated visually, qualitatively, and quantitatively. We then use several variants of an interactive single-click image segmentation algorithm to demonstrate the power of our method.",
"title": ""
},
{
"docid": "43f9cd44dee709339fe5b11eb73b15b6",
"text": "Mutual interference of radar systems has been identified as one of the major challenges for future automotive radar systems. In this work the interference of frequency (FMCW) and phase modulated continuous wave (PMCW) systems is investigated by means of simulations. All twofold combinations of the aforementioned systems are considered. The interference scenario follows a typical use-case from the well-known MOre Safety for All by Radar Interference Mitigation (MOSARIM) study. The investigated radar systems operate with similar system parameters to guarantee a certain comparability, but with different waveform durations, and chirps with different slopes and different phase code sequences, respectively. Since the effects in perfect synchrony are well understood, we focus on the cases where both systems exhibit a certain asynchrony. It is shown that the energy received from interferers can cluster in certain Doppler bins in the range-Doppler plane when systems exhibit a slight asynchrony.",
"title": ""
},
{
"docid": "3176f0a4824b2dd11d612d55b4421881",
"text": "This article reviews some of the criticisms directed towards the eclectic paradigm of international production over the past decade, and restates its main tenets. The second part of the article considers a number of possible extensions of the paradigm and concludes by asserting that it remains \"a robust general framework for explaining and analysing not only the economic rationale of economic production but many organisational nd impact issues in relation to MNE activity as well.\"",
"title": ""
},
{
"docid": "811c430ff9efd0f8a61ff40753f083d4",
"text": "The Waikato Environment for Knowledge Analysis (Weka) is a comprehensive suite of Java class libraries that implement many state-of-the-art machine learning and data mining algorithms. Weka is freely available on the World-Wide Web and accompanies a new text on data mining [1] which documents and fully explains all the algorithms it contains. Applications written using the Weka class libraries can be run on any computer with a Web browsing capability; this allows users to apply machine learning techniques to their own data regardless of computer platform.",
"title": ""
},
{
"docid": "5859379f3c4c5a7186c9dc8c85e1e384",
"text": "Purpose – Investigate the use of two imaging-based methods – coded pattern projection and laser-based triangulation – to generate 3D models as input to a rapid prototyping pipeline. Design/methodology/approach – Discusses structured lighting technologies as suitable imaging-based methods. Two approaches, coded-pattern projection and laser-based triangulation, are specifically identified and discussed in detail. Two commercial systems are used to generate experimental results. These systems include the Genex Technologies 3D FaceCam and the Integrated Vision Products Ranger System. Findings – Presents 3D reconstructions of objects from each of the commercial systems. Research limitations/implications – Provides background in imaging-based methods for 3D data collection and model generation. A practical limitation is that imaging-based systems do not currently meet accuracy requirements, but continued improvements in imaging systems will minimize this limitation. Practical implications – Imaging-based approaches to 3D model generation offer potential to increase scanning time and reduce scanning complexity. Originality/value – Introduces imaging-based concepts to the rapid prototyping pipeline.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "60d8839833d10b905729e3d672cafdd6",
"text": "In order to account for the phenomenon of virtual pitch, various theories assume implicitly or explicitly that each spectral component introduces a series of subharmonics. The spectral-compression method for pitch determination can be viewed as a direct implementation of this principle. The widespread application of this principle in pitch determination is, however, impeded by numerical problems with respect to accuracy and computational efficiency. A modified algorithm is described that solves these problems. Its performance is tested for normal speech and \"telephone\" speech, i.e., speech high-pass filtered at 300 Hz. The algorithm out-performs the harmonic-sieve method for pitch determination, while its computational requirements are about the same. The algorithm is described in terms of nonlinear system theory, i.c., subharmonic summation. It is argued that the favorable performance of the subharmonic-summation algorithm stems from its corresponding more closely with current pitch-perception theories than does the harmonic sieve.",
"title": ""
}
] |
scidocsrr
|
f8a56babbb0a788a5a5846259882844d
|
Improving Security Level through Obfuscation Technique for Source Code Protection using AES Algorithm
|
[
{
"docid": "fe944f1845eca3b0c252ada2c0306d61",
"text": "Now a days sharing the information over internet is becoming a critical issue due to security problems. Hence more techniques are needed to protect the shared data in an unsecured channel. The present work focus on combination of cryptography and steganography to secure the data while transmitting in the network. Firstly the data which is to be transmitted from sender to receiver in the network must be encrypted using the encrypted algorithm in cryptography .Secondly the encrypted data must be hidden in an image or video or an audio file with help of steganographic algorithm. Thirdly by using decryption technique the receiver can view the original data from the hidden image or video or audio file. Transmitting data or document can be done through these ways will be secured. In this paper we implemented three encrypt techniques like DES, AES and RSA algorithm along with steganographic algorithm like LSB substitution technique and compared their performance of encrypt techniques based on the analysis of its stimulated time at the time of encryption and decryption process and also its buffer size experimentally. The entire process has done in C#.",
"title": ""
},
{
"docid": "24fc1997724932c6ddc3311a529d7505",
"text": "In these days securing a network is an important issue. Many techniques are provided to secure network. Cryptographic is a technique of transforming a message into such form which is unreadable, and then retransforming that message back to its original form. Cryptography works in two techniques: symmetric key also known as secret-key cryptography algorithms and asymmetric key also known as public-key cryptography algorithms. In this paper we are reviewing different symmetric and asymmetric algorithms.",
"title": ""
},
{
"docid": "395dcc7c09562f358c07af9c999fbdc7",
"text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.",
"title": ""
}
] |
[
{
"docid": "806088642828d5064e0b52f3c08f6ce9",
"text": "We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE’s ability to reconstruct inputs which are not near the data manifold. Furthermore, we show that a wide variety of features can be learned when different activation functions are used. Finally, connections are established with the Contractive and Sparse Auto-Encoders.",
"title": ""
},
{
"docid": "d6e76bfeeb127addcbe2eb77b1b0ad7e",
"text": "The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or contextdependent phonemes (CD-phonemes) as their modeling units. However, it has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes by sequence-to-sequence attention-based model. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on HKUST datasets demonstrate that the lexicon free modeling units can outperform lexicon related modeling units in terms of character error rate (CER). Among five modeling units, character based model performs best and establishes a new state-of-the-art CER of 26.64% on HKUST datasets without a hand-designed lexicon and an extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% by the joint CTC-attention based encoder-decoder network.",
"title": ""
},
{
"docid": "032db9c2dba42ca376e87b28ecb812fa",
"text": "This paper tries to put various ways in which Natural Language Processing (NLP) and Software Engineering (SE) can be seen as inter-disciplinary research areas. We survey the current literature, with the aim of assessing use of Software Engineering and Natural Language Processing tools in the researches undertaken. An assessment of how various phases of SDLC can employ NLP techniques is presented. The paper also provides the justification of the use of text for automating or combining both these areas. A short research direction while undertaking multidisciplinary research is also provided.",
"title": ""
},
{
"docid": "25176cef55afd54f06b7127d10729f5e",
"text": "Senescent cells (SCs) accumulate with age and after genotoxic stress, such as total-body irradiation (TBI). Clearance of SCs in a progeroid mouse model using a transgenic approach delays several age-associated disorders, suggesting that SCs play a causative role in certain age-related pathologies. Thus, a 'senolytic' pharmacological agent that can selectively kill SCs holds promise for rejuvenating tissue stem cells and extending health span. To test this idea, we screened a collection of compounds and identified ABT263 (a specific inhibitor of the anti-apoptotic proteins BCL-2 and BCL-xL) as a potent senolytic drug. We show that ABT263 selectively kills SCs in culture in a cell type– and species-independent manner by inducing apoptosis. Oral administration of ABT263 to either sublethally irradiated or normally aged mice effectively depleted SCs, including senescent bone marrow hematopoietic stem cells (HSCs) and senescent muscle stem cells (MuSCs). Notably, this depletion mitigated TBI-induced premature aging of the hematopoietic system and rejuvenated the aged HSCs and MuSCs in normally aged mice. Our results demonstrate that selective clearance of SCs by a pharmacological agent is beneficial in part through its rejuvenation of aged tissue stem cells. Thus, senolytic drugs may represent a new class of radiation mitigators and anti-aging agents.",
"title": ""
},
{
"docid": "a17241732ee8e9a8bc34caea2f08545d",
"text": "Text line segmentation is an essential pre-processing stage for off-line handwriting recognition in many Optical Character Recognition (OCR) systems. It is an important step because inaccurately segmented text lines will cause errors in the recognition stage. Text line segmentation of the handwritten documents is still one of the most complicated problems in developing a reliable OCR. The nature of handwriting makes the process of text line segmentation very challenging. Several techniques to segment handwriting text line have been proposed in the past. This paper seeks to provide a comprehensive review of the methods of off-line handwriting text line segmentation proposed by researchers.",
"title": ""
},
{
"docid": "e872173252bf7b516183d3e733c36f6c",
"text": "Nonlinear autoregressive moving average with exogenous inputs (NARMAX) models have been successfully demonstrated for modeling the input-output behavior of many complex systems. This paper deals with the proposition of a scheme to provide time series prediction. The approach is based on a recurrent NARX model obtained by linear combination of a recurrent neural network (RNN) output and the real data output. Some prediction metrics are also proposed to assess the quality of predictions. This metrics enable to compare different prediction schemes and provide an objective way to measure how changes in training or prediction model (Neural network architecture) affect the quality of predictions. Results show that the proposed NARX approach consistently outperforms the prediction obtained by the RNN neural network.",
"title": ""
},
{
"docid": "33a9140fb57200a489b9150d39f0ab65",
"text": "In this paper, a double-quadrant state-of-charge (SoC)-based droop control method for distributed energy storage system is proposed to reach the proper power distribution in autonomous dc microgrids. In order to prolong the lifetime of the energy storage units (ESUs) and avoid the overuse of a certain unit, the SoC of each unit should be balanced and the injected/output power should be gradually equalized. Droop control as a decentralized approach is used as the basis of the power sharing method for distributed energy storage units. In the charging process, the droop coefficient is set to be proportional to the nth order of SoC, while in the discharging process, the droop coefficient is set to be inversely proportional to the nth order of SoC. Since the injected/output power is inversely proportional to the droop coefficient, it is obtained that in the charging process the ESU with higher SoC absorbs less power, while the one with lower SoC absorbs more power. Meanwhile, in the discharging process, the ESU with higher SoC delivers more power and the one with lower SoC delivers less power. Hence, SoC balancing and injected/output power equalization can be gradually realized. The exponent n of SoC is employed in the control diagram to regulate the speed of SoC balancing. It is found that with larger exponent n, the balancing speed is higher. MATLAB/simulink model comprised of three ESUs is implemented and the simulation results are shown to verify the proposed approach.",
"title": ""
},
{
"docid": "4d52865efa6c359d68125c7013647c86",
"text": "In recent years, we have witnessed an unprecedented proliferation of large document collections. This development has spawned the need for appropriate analytical means. In particular, to seize the thematic composition of large document collections, researchers increasingly draw on quantitative topic models. Among their most prominent representatives is the Latent Dirichlet Allocation (LDA). Yet, these models have significant drawbacks, e.g. the generated topics lack context and thus meaningfulness. Prior research has rarely addressed this limitation through the lens of mixed-methods research. We position our paper towards this gap by proposing a structured mixedmethods approach to the meaningful analysis of large document collections. Particularly, we draw on qualitative coding and quantitative hierarchical clustering to validate and enhance topic models through re-contextualization. To illustrate the proposed approach, we conduct a case study of the thematic composition of the AIS Senior Scholars' Basket of Journals.",
"title": ""
},
{
"docid": "d6bd475e9929748bbb71ac0d82e4f067",
"text": "We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions – the math portion of the Scholastic Aptitude Test (SAT). By using a tree transducer cascade as its basic architecture, our system (called EUCLID) propagates uncertainty from multiple sources (e.g. coreference resolution or verb interpretation) until it can be confidently resolved. Experiments show the first-ever results (43% recall and 91% precision) on SAT algebra word problems. We also apply EUCLID to the public Dolphin algebra question set, and improve the state-of-the-art F1-score from 73.9% to 77.0%.",
"title": ""
},
{
"docid": "a0ebe19188abab323122a5effc3c4173",
"text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.",
"title": ""
},
{
"docid": "119a4b04bc042b68f4b32480a069f6d4",
"text": "Preserving the availability and integrity of the power grid critical infrastructures in the face of fast-spreading intrusions requires advances in detection techniques specialized for such large-scale cyber-physical systems. In this paper, we present a security-oriented cyber-physical state estimation (SCPSE) system, which, at each time instant, identifies the compromised set of hosts in the cyber network and the maliciously modified set of measurements obtained from power system sensors. SCPSE fuses uncertain information from different types of distributed sensors, such as power system meters and cyber-side intrusion detectors, to detect the malicious activities within the cyber-physical system. We implemented a working prototype of SCPSE and evaluated it using the IEEE 24-bus benchmark system. The experimental results show that SCPSE significantly improves on the scalability of traditional intrusion detection techniques by using information from both cyber and power sensors. Furthermore, SCPSE was able to detect all the attacks against the control network in our experiments.",
"title": ""
},
{
"docid": "759bf80a33903899cb7f684aa277eddd",
"text": "Effective patient similarity assessment is important for clinical decision support. It enables the capture of past experience as manifested in the collective longitudinal medical records of patients to help clinicians assess the likely outcomes resulting from their decisions and actions. However, it is challenging to devise a patient similarity metric that is clinically relevant and semantically sound. Patient similarity is highly context sensitive: it depends on factors such as the disease, the particular stage of the disease, and co-morbidities. One way to discern the semantics in a particular context is to take advantage of physicians’ expert knowledge as reflected in labels assigned to some patients. In this paper we present a method that leverages localized supervised metric learning to effectively incorporate such expert knowledge to arrive at semantically sound patient similarity measures. Experiments using data obtained from the MIMIC II database demonstrate the effectiveness of this approach.",
"title": ""
},
{
"docid": "902e6d047605a426ae9bebc3f9ddf139",
"text": "Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "11962ec2381422cfac77ad543b519545",
"text": "In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers. To address this, we introduce a new meta-algorithm that can take in a base learner such as least squares or stochastic gradient descent, and harden the learner to be resistant to outliers. Our method, Sever, possesses strong theoretical guarantees yet is also highly scalable—beyond running the base learner itself, it only requires computing the top singular vector of a certain n×d matrix. We apply Sever on a drug design dataset and a spam classification dataset, and find that in both cases it has substantially greater robustness than several baselines. On the spam dataset, with 1% corruptions, we achieved 7.4% test error, compared to 13.4%− 20.5% for the baselines, and 3% error on the uncorrupted dataset. Similarly, on the drug design dataset, with 10% corruptions, we achieved 1.42 mean-squared error test error, compared to 1.51-2.33 for the baselines, and 1.23 error on the uncorrupted dataset.",
"title": ""
},
{
"docid": "448b1a9645216cedc89feac0afd70d0c",
"text": "Voluminous amounts of data have been produced, since the past decade as the miniaturization of Internet of things (IoT) devices increases. However, such data are not useful without analytic power. Numerous big data, IoT, and analytics solutions have enabled people to obtain valuable insight into large data generated by IoT devices. However, these solutions are still in their infancy, and the domain lacks a comprehensive survey. This paper investigates the state-of-the-art research efforts directed toward big IoT data analytics. The relationship between big data analytics and IoT is explained. Moreover, this paper adds value by proposing a new architecture for big IoT data analytics. Furthermore, big IoT data analytic types, methods, and technologies for big data mining are discussed. Numerous notable use cases are also presented. Several opportunities brought by data analytics in IoT paradigm are then discussed. Finally, open research challenges, such as privacy, big data mining, visualization, and integration, are presented as future research directions.",
"title": ""
},
{
"docid": "4e5c9901da9ee977d995dd4fd6b9b6bd",
"text": "kmlonolgpbqJrtsHu qNvwlyxzl{vw|~}ololyp | xolyxoqNv
J lgxgOnyc}g pAqNvwl lgrc p|HqJbxz|r rc|pb4|HYl xzHnzl}o}gpb |p'w|rmlypnoHpb0rpb }zqJOn pyxg |HqJOp c}&olypb%nov4|rrclgpbYlo%ys{|Xq|~qlo noxX}ozz|~}lz rlo|xgp4pb0|~} |3 loqNvwH J xzOpb0| p|HqJbxz|rr|pbw|~lmxzHnolo}o}gpb;}gsH}oqly ¡cqOv rpb }zqJOnm¢~p TrloHYly¤£;r¥qOv4XHv&noxX}ozz|~}lz |YxzH|Ynvwl}]vw|~l zlolyp¦}4nonolo}o}gbrp2 |p4s o lyxzlypbq |xzlo|~}^]p|~q§bxz|r4r|pbw|~lmxzHnolo}o}gpbHu ̈cq©c} Joqhlyp qNvwl]no|~}yl^qNvw|~qaqNvwl}llqOv4~} no|o4qJbxzl qNvwl&rtpbbc}oq§Nn pgHxg |HqJOp#qNvwlys%|xol Xlgrrpb«pxzlonoqJrts¦p r|xJYl2w|X¬g4l&q|Xgrclo}2J }oqh|HqJc}o qJOn};®v }&no|p |~¢l¦cq3 ̄=nybr°q]qh%|p|rsH±ylu bpXlgx}zqh|p|p%]xzl qNvwl«|XgrcqJsLJ&qOv4lo}l |Yxo|Xnov4lo}q HYlyr pYlyxgrtspw0rtpw~bc}oqJOn;zlvw|Nxg 2gp¦qNv c} 4|o4lyxou 3l rr Yl}ngxgNzl;| }g rlxgbrlzo|H}lo |oYxzH|Ynv q |Xq|~qlo rlo|xgp4pb0 rpbbc}oq§On^¢p TrcloHYlgT®v } |oYxzH|Ynv vw|~} ololgp}ovw ¡p2xL| ́p4bolyxLJ&q|~}o¢} qhno|4qJ xol¦pgxg|~q§Np p |nyrlo|xolgx2|p# xzl«xzlonq |~}ov Op cqNvwXq]|%noxo c}l p«wlyxxg |pnoly3μLl¶xzl}lgpwq¶|«Ylq|rlo«no|H}l }oqJ4s%J qOvbc} rclz|xgp4pw0lqNvwHL|YrtOlo qh4|xoq]J;}J4lolznv2qh|HHpb",
"title": ""
},
{
"docid": "3d2e47ed90e8ff4dec54e85e4996c961",
"text": "Open source software encourages innovation by allowing users to extend the functionality of existing applications. Treeview is a popular application for the visualization of microarray data, but is closed-source and platform-specific, which limits both its current utility and suitability as a platform for further development. Java Treeview is an open-source, cross-platform rewrite that handles very large datasets well, and supports extensions to the file format that allow the results of additional analysis to be visualized and compared. The combination of a general file format and open source makes Java Treeview an attractive choice for solving a class of visualization problems. An applet version is also available that can be used on any website with no special server-side setup.",
"title": ""
},
{
"docid": "6a51aba04d0af9351e86b8a61b4529cb",
"text": "Cloud computing is a newly emerged technology, and the rapidly growing field of IT. It is used extensively to deliver Computing, data Storage services and other resources remotely over internet on a pay per usage model. Nowadays, it is the preferred choice of every IT organization because it extends its ability to meet the computing demands of its everyday operations, while providing scalability, mobility and flexibility with a low cost. However, the security and privacy is a major hurdle in its success and its wide adoption by organizations, and the reason that Chief Information Officers (CIOs) hesitate to move the data and applications from premises of organizations to the cloud. In fact, due to the distributed and open nature of the cloud, resources, applications, and data are vulnerable to intruders. Intrusion Detection System (IDS) has become the most commonly used component of computer system security and compliance practices that defends network accessible Cloud resources and services from various kinds of threats and attacks. This paper presents an overview of different intrusions in cloud, various detection techniques used by IDS and the types of Cloud Computing based IDS. Then, we analyze some pertinent existing cloud based intrusion detection systems with respect to their various types, positioning, detection time and data source. The analysis also gives strengths of each system, and limitations, in order to evaluate whether they carry out the security requirements of cloud computing environment or not. We highlight the deployment of IDS that uses multiple detection approaches to deal with security challenges in cloud.",
"title": ""
},
{
"docid": "2e2cffc777e534ad1ab7a5c638e0574e",
"text": "BACKGROUND\nPoly(ADP-ribose)polymerase-1 (PARP-1) is a highly promising novel target in breast cancer. However, the expression of PARP-1 protein in breast cancer and its associations with outcome are yet poorly characterized.\n\n\nPATIENTS AND METHODS\nQuantitative expression of PARP-1 protein was assayed by a specific immunohistochemical signal intensity scanning assay in a range of normal to malignant breast lesions, including a series of patients (N = 330) with operable breast cancer to correlate with clinicopathological factors and long-term outcome.\n\n\nRESULTS\nPARP-1 was overexpressed in about a third of ductal carcinoma in situ and infiltrating breast carcinomas. PARP-1 protein overexpression was associated to higher tumor grade (P = 0.01), estrogen-negative tumors (P < 0.001) and triple-negative phenotype (P < 0.001). The hazard ratio (HR) for death in patients with PARP-1 overexpressing tumors was 7.24 (95% CI; 3.56-14.75). In a multivariate analysis, PARP-1 overexpression was an independent prognostic factor for both disease-free (HR 10.05; 95% CI 5.42-10.66) and overall survival (HR 1.82; 95% CI 1.32-2.52).\n\n\nCONCLUSIONS\nNuclear PARP-1 is overexpressed during the malignant transformation of the breast, particularly in triple-negative tumors, and independently predicts poor prognosis in operable invasive breast cancer.",
"title": ""
}
] |
scidocsrr
|
6fd040dc325fe201973f655a2c237c99
|
Performance Analysis of GWO , GA and PSO Optimized FOPID and PSS for SMIB System
|
[
{
"docid": "9888a7723089d2f1218e6e1a186a5e91",
"text": "This classic text offers you the key to understanding short circuits, open conductors and other problems relating to electric power systems that are subject to unbalanced conditions. Using the method of symmetrical components, acknowledged expert Paul M. Anderson provides comprehensive guidance for both finding solutions for faulted power systems and maintaining protective system applications. You'll learn to solve advanced problems, while gaining a thorough background in elementary configurations. Features you'll put to immediate use: Numerous examples and problems Clear, concise notation Analytical simplifications Matrix methods applicable to digital computer technology Extensive appendices",
"title": ""
}
] |
[
{
"docid": "543348825e8157926761b2f6a7981de2",
"text": "With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.",
"title": ""
},
{
"docid": "b27d5552763f6e5656610e01160acc08",
"text": "PURPOSE\nSome researchers (F. R. Vellutino, F. M. Scanlon, & M. S. Tanzman, 1994) have argued that the different domains comprising language (e.g., phonology, semantics, and grammar) may influence reading development in a differential manner and at different developmental periods. The purpose of this study was to examine proposed causal relationships among different linguistic subsystems and different measures of reading achievement in a group of children with reading disabilities.\n\n\nMETHODS\nParticipants were 279 students in 2nd to 3rd grade who met research criteria for reading disability. Of those students, 108 were girls and 171 were boys. In terms of heritage, 135 were African and 144 were Caucasian. Measures assessing pre-reading skills, word identification, reading comprehension, and general oral language skills were administered.\n\n\nRESULTS\nStructural equation modeling analyses indicated receptive and expressive vocabulary knowledge was independently related to pre-reading skills. Additionally, expressive vocabulary knowledge and listening comprehension skills were found to be independently related to word identification abilities.\n\n\nCONCLUSION\nResults are consistent with previous research indicating that oral language skills are related to reading achievement (e.g., A. Olofsson & J. Niedersoe, 1999; H. S. Scarborough, 1990). Results from this study suggest that receptive and expressive vocabulary knowledge influence pre-reading skills in differential ways. Further, results suggest that expressive vocabulary knowledge and listening comprehension skills facilitate word identification skills.",
"title": ""
},
{
"docid": "da4f95cc061e7f2433ffa37a8e34437e",
"text": "Active learning has been proven to be quite effective in reducing the human labeling efforts by actively selecting the most informative examples to label. In this paper, we present a batch-mode active learning method based on logistic regression. Our key motivation is an out-of-sample bound on the estimation error of class distribution in logistic regression conditioned on any fixed training sample. It is different from a typical PACstyle passive learning error bound, that relies on the i.i.d. assumption of example-label pairs. In addition, it does not contain the class labels of the training sample. Therefore, it can be immediately used to design an active learning algorithm by minimizing this bound iteratively. We also discuss the connections between the proposed method and some existing active learning approaches. Experiments on benchmark UCI datasets and text datasets demonstrate that the proposed method outperforms the state-of-the-art active learning methods significantly.",
"title": ""
},
{
"docid": "66044816ca1af0198acd27d22e0e347e",
"text": "BACKGROUND\nThe Close Kinetic Chain Upper Extremity Stability Test (CKCUES test) is a low cost shoulder functional test that could be considered as a complementary and objective clinical outcome for shoulder performance evaluation. However, its reliability was tested only in recreational athletes' males and there are no studies comparing scores between sedentary and active samples. The purpose was to examine inter and intrasession reliability of CKCUES Test for samples of sedentary male and female with (SIS), for samples of sedentary healthy male and female, and for male and female samples of healthy upper extremity sport specific recreational athletes. Other purpose was to compare scores within sedentary and within recreational athletes samples of same gender.\n\n\nMETHODS\nA sample of 108 subjects with and without SIS was recruited. Subjects were tested twice, seven days apart. Each subject performed four test repetitions, with 45 seconds of rest between them. The last three repetitions were averaged and used to statistical analysis. Intraclass Correlation Coefficient ICC2,1 was used to assess intrasession reliability of number of touches score and ICC2,3 was used to assess intersession reliability of number of touches, normalized score, and power score. Test scores within groups of same gender also were compared. Measurement error was determined by calculating the Standard Error of the Measurement (SEM) and Minimum detectable change (MDC) for all scores.\n\n\nRESULTS\nThe CKCUES Test showed excellent intersession reliability for scores in all samples. Results also showed excellent intrasession reliability of number of touches for all samples. Scores were greater in active compared to sedentary, with exception of power score. All scores were greater in active compared to sedentary and SIS males and females. SEM ranged from 1.45 to 2.76 touches (based on a 95% CI) and MDC ranged from 2.05 to 3.91(based on a 95% CI) in subjects with and without SIS. At least three touches are needed to be considered a real improvement on CKCUES Test scores.\n\n\nCONCLUSION\nResults suggest CKCUES Test is a reliable tool to evaluate upper extremity functional performance for sedentary, for upper extremity sport specific recreational, and for sedentary males and females with SIS.",
"title": ""
},
{
"docid": "9b19f343a879430283881a69e3f9cb78",
"text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.",
"title": ""
},
{
"docid": "f82a9c15e88ba24dbf8f5d4678b8dffd",
"text": "Numerous existing object segmentation frameworks commonly utilize the object bounding box as a prior. In this paper, we address semantic segmentation assuming that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in the subsequent graph-cut-based figure-ground segmentation. The final segmentation result is obtained by merging the segmentation results in the bounding boxes. We conduct an extensive analysis of the effect of object bounding box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and GrabCut-50 image segmentation dataset show that the proposed approach achieves competitive results compared to previous detection or bounding box prior based methods, as well as other state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "9153e5e34130ce42df99d86131289744",
"text": "We present an adiabatic taper design in three dimensions for coupling light into photonic crystal defect waveguides in a square lattice of circular dielectric rods. The taper is a two-stage structure in which the first stage makes the transition from a dielectric waveguide to a coupled-cavity waveguide. The second stage subsequently transforms the waveguide mode from an index-guided mode to a band-gap-guided mode. We discuss differences between the two-dimensional device and its three-dimensional slab version. © 2003 Optical Society of America OCIS codes: 060.1810, 130.3120, 230.7370.",
"title": ""
},
{
"docid": "82e78a0e89a5fe7ca4465af9d7a4dc3e",
"text": "While Six Sigma is increasingly implemented in industry, little academic research has been done on Six Sigma and its influence on quality management theory and application. There is a criticism that Six Sigma simply puts traditional quality management practices in a new package. To investigate this issue and the role of Six Sigma in quality management, this study reviewed both the traditional quality management and Six Sigma literatures and identified three new practices that are critical for implementing Six Sigma’s concept and method in an organization. These practices are referred to as: Six Sigma role structure, Six Sigma structured improvement procedure, and Six Sigma focus on metrics. A research model and survey instrument were developed to investigate how these Six Sigma practices integrate with seven traditional quality management practices to affect quality performance and business performance. Test results based on a sample of 226 US manufacturing plants revealed that the three Six Sigma practices are distinct practices from traditional quality management practices, and that they complement the traditional quality management practices in improving performance. The implications of the findings for researchers and practitioners are discussed and further research directions are offered. # 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "10b8223c9005bd5bdd2836d17541bbb1",
"text": "This study explores the stability of attachment security and representations from infancy to early adulthood in a sample chosen originally for poverty and high risk for poor developmental outcomes. Participants for this study were 57 young adults who are part of an ongoing prospective study of development and adaptation in a high-risk sample. Attachment was assessed during infancy by using the Ainsworth Strange Situation (Ainsworth & Wittig) and at age 19 by using the Berkeley Adult Attachment Interview (George, Kaplan, & Main). Possible correlates of continuity and discontinuity in attachment were drawn from assessments of the participants and their mothers over the course of the study. Results provided no evidence for significant continuity between infant and adult attachment in this sample, with many participants transitioning to insecurity. The evidence, however, indicated that there might be lawful discontinuity. Analyses of correlates of continuity and discontinuity in attachment classification from infancy to adulthood indicated that the continuous and discontinuous groups were differentiated on the basis of child maltreatment, maternal depression, and family functioning in early adolescence. These results provide evidence that although attachment has been found to be stable over time in other samples, attachment representations are vulnerable to difficult and chaotic life experiences.",
"title": ""
},
{
"docid": "19f604732dd88b01e1eefea1f995cd54",
"text": "Power electronic transformer (PET) technology is one of the promising technology for medium/high power conversion systems. With the cutting-edge improvements in the power electronics and magnetics, makes it possible to substitute conventional line frequency transformer traction (LFTT) technology with the PET technology. Over the past years, research and field trial studies are conducted to explore the technical challenges associated with the operation, functionalities, and control of PET-based traction systems. This paper aims to review the essential requirements, technical challenges, and the existing state of the art of PET traction system architectures. Finally, this paper discusses technical considerations and introduces the new research possibilities especially in the power conversion stages, PET design, and the power switching devices.",
"title": ""
},
{
"docid": "a7765d68c277dbc712376a46a377d5d4",
"text": "The trend of currency rates can be predicted with supporting from supervised machine learning in the transaction systems such as support vector machine. Not only representing models in use of machine learning techniques in learning, the support vector machine (SVM) model also is implemented with actual FoRex transactions. This might help automatically to make the transaction decisions of Bid/Ask in Foreign Exchange Market by using Expert Advisor (Robotics). The experimental results show the advantages of use SVM compared to the transactions without use SVM ones.",
"title": ""
},
{
"docid": "22286933cdcdb34870ff08980b8c278a",
"text": "Identifying sentiment of opinion target is an essential component of many tasks for sentiment analysis. We firstly identify the sentiment of the clause in which the specific opinion target lie and then infer the sentiment of opinion target from the sentiment of clause. In order to utilize context more adequately, We propose a novel model using Long Short-Term Memory(LSTM) and Convolutional Neural Network(CNN) together to identify the sentiment of clause. LSTM is used for generating context embedding and CNN is treated as a trainable feature detector. In the experiment using product reviews data, our model outperforms traditional methods in the aspect of accuracy. What's more, the time of model training is acceptable and our model is more scalable because we don't need to discovery rules manually and prepare lots of external language resources which is laborious and time-consuming.",
"title": ""
},
{
"docid": "154f5455f593e8ebf7058cc0a32426a2",
"text": "Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze them in the Cloud, have been developed with the spread of various sensors and Cloud computing technologies. However, difficulties arise because of the limitation of the network bandwidth between the sensors and the Cloud. In addition, sending raw sensor data to a Cloud may introduce privacy issues. Therefore, we propose distributed deep learning processing between sensors and the Cloud in a pipeline manner to reduce the amount of data sent to the Cloud and protect the privacy of the users. In this paper, we have developed a pipeline-based distributed processing method for the Caffe deep learning framework and investigated the processing times of the classification by varying a division point and the parameters of the network models using data sets, CIFAR-10 and ImageNet. The experiments show that the accuracy of deep learning with coarse-grain data is comparable to that with the default parameter settings, and the proposed distributed processing method has performance advantages in cases of insufficient network bandwidth with actual sensors and a Cloud environment.",
"title": ""
},
{
"docid": "9f44d82b0f11037e593e719ae0c60a13",
"text": "The past 25 years have been a significant period with advances in the development of interior permanent magnet (IPM) machines. Line-start small IPM synchronous motors have expanded their presence in the domestic marketplace from few specialized niche markets in high efficiency machine tools, household appliances, small utility motors, and servo drives to mass-produced applications. A closer examination reveals that several different knowledge-based technological advancements and market forces as well as consumer demand for high efficiency requirements have combined, sometimes in fortuitous ways, to accelerate the development of the improved new small energy efficient motors. This paper provides a broad explanation of the various factors that lead to the current state of the art of the single-phase interior permanent motor drive technology. A unified analysis of single-phase IPM motor that permits the determination of the steady-state, dynamic, and transient performances is presented. The mathematical model is based on both d-q axis theory and finite-element analysis. It leads to more accurate numerical results and meets the engineering requirements more satisfactorily than any other methods. Finally, some concluding comments and remarks are provided for efficiency improvement, manufacturing, and future research trends of line-start energy efficient permanent magnet synchronous motors.",
"title": ""
},
{
"docid": "72def907d2404ea82942d6b09d0f438b",
"text": "This paper proposes the measurement of an ozone generator using a phase-shifted pulse width modulation (PWM) full bridge inverter as a power supply. The method of electrode parameter measurement for an ozone generator is presented. An electrode set is represented by an equivalent circuit by parallel capacitor and resistor. The test was performed with high-frequency high-voltage ac power supply. Then voltage and charge characteristics were obtained leading to calculating values of the parallel capacitor and resistor. The validity of the obtained parameters is verified with a comparison between simulation and experimental results. It has proved that the proposed equivalent circuit with parameters obtained from the proposed method is valid which is in agreement with experimental results. It can be used for power supply and electrode design for generating ozone gas. The correctness of the proposed technique is verified by both simulation and experimental results.",
"title": ""
},
{
"docid": "c2733a7dd7006b05852475c21a61bbee",
"text": "Data mining is the practice of examining and deriving purposeful information from the data. Data mining finds its application in various fields like finance, retail, medicine, agriculture etc. Data mining in agriculture is used for analyzing the various biotic and abiotic factors. Agriculture in India plays a predominant role in economy and employment. The common problem existing among the Indian farmers are they don't choose the right crop based on their soil requirements. Due to this they face a serious setback in productivity. This problem of the farmers has been addressed through precision agriculture. Precision agriculture is a modern farming technique that uses research data of soil characteristics, soil types, crop yield data collection and suggests the farmers the right crop based on their site-specific parameters. This reduces the wrong choice on a crop and increase in productivity. In this paper, this problem is solved by proposing a recommendation system through an ensemble model with majority voting technique using Random tree, CHAID, K-Nearest Neighbor and Naive Bayes as learners to recommend a crop for the site specific parameters with high accuracy and efficiency.",
"title": ""
},
{
"docid": "8d350db000f7a2b1481b9cad6ce318f1",
"text": "Purpose – The purpose of this research paper is to offer a solution to differentiate supply chain planning for products with different demand features and in different life-cycle phases. Design/methodology/approach – A normative framework for selecting a planning approach was developed based on a literature review of supply chain differentiation and supply chain planning. Explorative mini-cases from three companies – Vaisala, Mattel, Inc. and Zara – were investigated to identify the features of their innovative planning solutions. The selection framework was applied to the case company’s new business unit dealing with a product portfolio of highly innovative products as well as commodity items. Findings – The need for planning differentiation is essential for companies with large product portfolios operating in volatile markets. The complexity of market, channel and supply networks makes supply chain planning more intricate. The case company provides an example of using the framework for rough segmentation to differentiate planning. Research limitations/implications – The paper widens Fisher’s supply chain selection framework to consider the aspects of planning. Practical implications – Despite substantial resources being used, planning results are often not reliable or consistent enough to ensure cost efficiency and adequate customer service. Therefore there is a need for management to critically consider current planning solutions. Originality/value – The procedure outlined in this paper is a first illustrative example of the type of processes needed to monitor and select the right planning approach.",
"title": ""
},
{
"docid": "4a163c071d54c641ef4c24a9c1b2299c",
"text": "Current taint tracking systems suffer from high overhead and a lack of generality. In this paper, we solve both of these issues with an extensible system that is an order of magnitude more efficient than previous software taint tracking systems and is fully general to dynamic data flow tracking problems. Our system uses a compiler to transform untrusted programs into policy-enforcing programs, and our system can be easily reconfigured to support new analyses and policies without modifying the compiler or runtime system. Our system uses a sound and sophisticated static analysis that can dramatically reduce the amount of data that must be dynamically tracked. For server programs, our system's average overhead is 0.65% for taint tracking, which is comparable to the best hardware-based solutions. For a set of compute-bound benchmarks, our system produces no runtime overhead because our compiler can prove the absence of vulnerabilities, eliminating the need to dynamically track taint. After modifying these benchmarks to contain format string vulnerabilities, our system's overhead is less than 13%, which is over 6X lower than the previous best solutions. We demonstrate the flexibility and power of our system by applying it to file disclosure vulnerabilities, a problem that taint tracking cannot handle. To prevent such vulnerabilities, our system introduces an average runtime overhead of 0.25% for three open source server programs.",
"title": ""
},
{
"docid": "0056d305c7689d45e7cd9f4b87cac79e",
"text": "A method is presented that uses a vectorial multiscale feature image for wave front propagation between two or more user defined points to retrieve the central axis of tubular objects in digital images. Its implicit scale selection mechanism makes the method more robust to overlap and to the presence of adjacent structures than conventional techniques that propagate a wave front over a scalar image representing the maximum of a range of filters. The method is shown to retain its potential to cope with severe stenoses or imaging artifacts and objects with varying widths in simulated and actual two-dimensional angiographic images.",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
}
] |
scidocsrr
|
17572600a0866f7586a9a223db95f7d3
|
Real-time Monocular Dense Mapping for Augmented Reality
|
[
{
"docid": "01e5f7c2eb9b55699c38a523776fc4a4",
"text": "We propose a formulation of monocular SLAM which combines live dense reconstruction with shape priors-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. More specifically, we automatically augment our SLAM system with object specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. This leads to a system that allows for full scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone.",
"title": ""
},
{
"docid": "7f3fe1eadb59d58db8e5911c1de3465f",
"text": "We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.",
"title": ""
},
{
"docid": "f0c08cb3e23e71bab0ff9ca73a4d7869",
"text": "A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state of art supervised methods for single view depth estimation.",
"title": ""
}
] |
[
{
"docid": "366cd2f9b48715c0a987a0f77093a780",
"text": "Work processes involving dozens or hundreds of collaborators are complex and difficult to manage. Problems within the process may have severe organizational and financial consequences. Visualization helps monitor and analyze those processes. In this paper, we study the development of large software systems as an example of a complex work process. We introduce Developer Rivers, a timeline-based visualization technique that shows how developers work on software modules. The flow of developers' activity is visualized by a river metaphor: activities are transferred between modules represented as rivers. Interactively switching between hierarchically organized modules and workload metrics allows for exploring multiple facets of the work process. We study typical development patterns by applying our visualization to Python and the Linux kernel.",
"title": ""
},
{
"docid": "9bbf3500233a900188987349ecbd6218",
"text": "Terpene synthases catalyze the conversion of linear prenyl-diphosphates to a multitude of hydrocarbon skeletons with often high regioand stereoselectivity. These remarkable enzymes all rely on a shared fold for activity, namely, the class I terpene cyclase fold. Recent work has illuminated the catalytic strategy used by these enzymes to catalyze the arguably most complex chemical reactions found in Nature. Terpene synthases catalyze the formation of a reactive carbocation and provide a template for the cyclization reactions while at the same time providing the necessary stability of the carbocationic reaction intermediates as well as strictly controlling water access.",
"title": ""
},
{
"docid": "d895b939ea60b41f7de7e64eb60e3b07",
"text": "Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net models, which are named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, Residual Network, as well as RCNN. There are several advantages of these proposed architectures for segmentation tasks. First, a residual unit helps when training deep architecture. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design better U-Net architecture with same number of network parameters with better performance for medical image segmentation. The proposed models are tested on three benchmark datasets such as blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models including UNet and residual U-Net (ResU-Net).",
"title": ""
},
{
"docid": "3ec603c63166167c88dc6d578a7c652f",
"text": "Peer-to-peer (P2P) lending or crowdlending, is a recent innovation allows a group of individual or institutional lenders to lend funds to individuals or businesses in return for interest payment on top of capital repayments. The rapid growth of P2P lending marketplaces has heightened the need to develop a support system to help lenders make sound lending decisions. But realizing such system is challenging in the absence of formal credit data used by the banking sector. In this paper, we attempt to explore the possible connections between user credit risk and how users behave in the lending sites. We present the first analysis of user detailed clickstream data from a large P2P lending provider. Our analysis reveals that the users’ sequences of repayment histories and financial activities in the lending site, have significant predictive value for their future loan repayments. In the light of this, we propose a deep architecture named DeepCredit, to automatically acquire the knowledge of credit risk from the sequences of activities that users conduct on the site. Experiments on our large-scale real-world dataset show that our model generates a high accuracy in predicting both loan delinquency and default, and significantly outperforms a number of baselines and competitive alternatives.",
"title": ""
},
{
"docid": "e451b5044b7d8f642d6718a3e46dfd2a",
"text": "Driving simulation from the very beginning of the advent of VR technology uses the very same technology for visualization and similar technology for head movement tracking and high end 3D vision. They also share the same or similar difficulties in rendering movements of the observer in the virtual environments. The visual-vestibular conflict, due to the discrepancies perceived by the human visual and vestibular systems, induce the so-called simulation sickness, when driving or displacing using a control device (ex. Joystick). Another cause for simulation sickness is the transport delay, the delay between the action and the corresponding rendering cues.\n Another similarity between driving simulation and VR is need for correct scale 1:1 perception. Correct perception of speed and acceleration in driving simulation is crucial for automotive experiments for Advances Driver Aid System (ADAS) as vehicle behavior has to be simulated correctly and anywhere where the correct mental workload is an issue as real immersion and driver attention is depending on it. Correct perception of distances and object size is crucial using HMDs or CAVEs, especially as their use is frequently involving digital mockup validation for design, architecture or interior and exterior lighting.\n Today, the advents of high resolution 4K digital display technology allows near eye resolution stereoscopic 3D walls and integrate them in high performance CAVEs. High performance CAVEs now can be used for vehicle ergonomics, styling, interior lighting and perceived quality. The first CAVE in France, built in 2001 at Arts et Metiers ParisTech, is a 4 sided CAVE with a modifiable geometry with now traditional display technology. The latest one is Renault's 70M 3D pixel 5 sides CAVE with 4K x 4K walls and floor and with a cluster of 20 PCs. Another equipment recently designed at Renault is the motion based CARDS driving simulator with CAVE like 4 sides display system providing full 3D immersion for the driver.\n The separation between driving simulation and digital mockup design review is now fading though different uses will require different simulation configurations.\n New application domains, such as automotive AR design, will bring combined features of VR and driving simulation technics, including CAVE like display system equipped driving simulators.",
"title": ""
},
{
"docid": "dc6ee3d45fa76aafe45507b0778018d5",
"text": "Traditional endpoint protection will not address the looming cybersecurity crisis because it ignores the source of the problem--the vast online black market buried deep within the Internet.",
"title": ""
},
{
"docid": "db6904a5aa2196dedf37b279e04b3ea8",
"text": "The use of animation and multimedia for learning is now further extended by the provision of entire Virtual Reality Learning Environments (VRLE). This highlights a shift in Web-based learning from a conventional multimedia to a more immersive, interactive, intuitive and exciting VR learning environment. VRLEs simulate the real world through the application of 3D models that initiates interaction, immersion and trigger the imagination of the learner. The question of good pedagogy and use of technology innovations comes into focus once again. Educators attempt to find theoretical guidelines or instructional principles that could assist them in developing and applying a novel VR learning environment intelligently. This paper introduces the educational use of Web-based 3D technologies and highlights in particular VR features. It then identifies constructivist learning as the pedagogical engine driving the construction of VRLE and discusses five constructivist learning approaches. Furthermore, the authors provide two case studies to investigate VRLEs for learning purposes. The authors conclude with formulating some guidelines for the effective use of VRLEs, including discussion of the limitations and implications for the future study of VRLEs. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "64f45424c2bfa571dd47523633cb5d03",
"text": "We demonstrate how adjustable robust optimization (ARO) problems with fixed recourse can be casted as static robust optimization problems via Fourier-Motzkin elimination (FME). Through the lens of FME, we characterize the structures of the optimal decision rules for a broad class of ARO problems. A scheme based on a blending of classical FME and a simple Linear Programming technique that can efficiently remove redundant constraints, is developed to reformulate ARO problems. This generic reformulation technique enhances the classical approximation scheme via decision rules, and enables us to solve adjustable optimization problems to optimality. We show via numerical experiments that, for small-size ARO problems our novel approach finds the optimal solution. For moderate or large-size instances, we eliminate a subset of the adjustable variables, which improves the solutions from decision rule approximations.",
"title": ""
},
{
"docid": "3072c5458a075e6643a7679ccceb1417",
"text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.",
"title": ""
},
{
"docid": "96f4f77f114fec7eca22d0721c5efcbe",
"text": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.",
"title": ""
},
{
"docid": "c03bf622dde1bd81c0eb83a87e1f9924",
"text": "Image-schemas (e.g. CONTAINER, PATH, FORCE) are pervasive skeletal patterns of a preconceptual nature which arise from everyday bodily and social experiences and which enable us to mentally structure perceptions and events (Johnson 1987; Lakoff 1987, 1989). Within Cognitive Linguistics, these recurrent non-propositional models are taken to unify the different sensory and motor experiences in which they manifest themselves in a direct way and, most significantly, they may be metaphorically projected from the realm of the physical to other more abstract domains. In this paper, we intend to provide a cognitively plausible account of the OBJECT image-schema, which has received rather contradictory treatments in the literature. The OBJECT schema is experientially grounded in our everyday interaction with our own bodies and with other discrete entities. In the light of existence-related language (more specifically, linguistic expressions concerning the creation and destruction of both physical and abstract entities), it is argued that the OBJECT image-schema may be characterized as a basic image-schema, i.e. one that functions as a guideline for the activation of additional models, including other dependent image-schematic patterns (LINK, PART-WHOLE, CENTREPERIPHERY, etc.) which highlight various facets of the higher-level schema.",
"title": ""
},
{
"docid": "b448ea63495d08866ba7759a4ede6895",
"text": "Heterogeneous sensor data fusion is a challenging field that has gathered significant interest in recent years. Two of these challenges are learning from data with missing values, and finding shared representations for multimodal data to improve inference and prediction. In this paper, we propose a multimodal data fusion framework, the deep multimodal encoder (DME), based on deep learning techniques for sensor data compression, missing data imputation, and new modality prediction under multimodal scenarios. While traditional methods capture only the intramodal correlations, DME is able to mine both the intramodal correlations in the initial layers and the enhanced intermodal correlations in the deeper layers. In this way, the statistical structure of sensor data may be better exploited for data compression. By incorporating our new objective function, DME shows remarkable ability for missing data imputation tasks in sensor data. The shared multimodal representation learned by DME may be used directly for predicting new modalities. In experiments with a real-world dataset collected from a 40-node agriculture sensor network which contains three modalities, DME can achieve a root mean square error (RMSE) of missing data imputation which is only 20% of the traditional methods like K-nearest neighbors and sparse principal component analysis and the performance is robust to different missing rates. It can also reconstruct temperature modality from humidity and illuminance with an RMSE of $7\\; {}^{\\circ }$C, directly from a highly compressed (2.1%) shared representation that was learned from incomplete (80% missing) data.",
"title": ""
},
{
"docid": "b16eb9ba71fa4ebcd690e9746773321e",
"text": "Macroeconometricians face a peculiar data structure. On the one hand, the number of years for which there is reliable and relevant data is limited and cannot readily be increased other than by the passage of time. On the other hand, for much of the postwar period statistical agencies have collected monthly or quarterly data on a great many related macroeconomic, financial, and sectoral variables. Thus macroeconometricians face data sets that have hundreds or even thousands of series, but the number of observations on each series is relatively short, for example 20 to 40 years of quarterly data. This chapter surveys work on a class of models, dynamic factor models (DFMs), which has received considerable attention in the past decade because of their ability to model simultaneously and consistently data sets in which the number of series exceeds the number of time series observations. Dynamic factor models were originally proposed by Geweke (1977) as a time-series extension of factor models previously developed for cross-sectional data. In early influential work, Sargent and Sims (1977) showed that two dynamic factors could explain a large fraction of the variance of important U.S. quarterly macroeconomic variables, including output, employment, and prices. This central empirical finding that a few factors can explain a large fraction of the variance of many macroeconomic series has been confirmed by many studies; see for example Giannone, The aim of this survey is to describe, at a level that is specific enough to be useful to researchers new to the area, the key theoretical results, applications, and empirical findings in the recent literature on DFMs. Bai and Ng (2008) and Stock and Watson 2 (2006) provide complementary surveys of this literature. Bai and Ng's (2008) survey is more technical than this one and focuses on the econometric theory and conditions; Stock and Watson (2006) focus on DFM-based forecasts in the context of other methods for forecasting with many predictors. The premise of a dynamic factor model is that a few latent dynamic factors, f t , drive the comovements of a high-dimensional vector of time-series variables, X t , which is also affected by a vector of mean-zero idiosyncratic disturbances, e t. These idiosyncratic disturbances arise from measurement error and from special features that are specific to an individual series (the effect of a Salmonella scare on restaurant employment, for example). The latent factors follow a time series process, which is …",
"title": ""
},
{
"docid": "ef241b52d4f4fdc892071f684b387242",
"text": "A description is given of Sprite, an experimental network operating system under development at the University of California at Berkeley. It is part of a larger research project, SPUR, for the design and construction of a high-performance multiprocessor workstation with special hardware support of Lisp applications. Sprite implements a set of kernel calls that provide sharing, flexibility, and high performance to networked workstations. The discussion covers: the application interface: the basic kernel structure; management of the file name space and file data, virtual memory; and process migration.<<ETX>>",
"title": ""
},
{
"docid": "3266af647a3a85d256d42abc6f3eca55",
"text": "This paper introduces a learning scheme to construct a Hilbert space (i.e., a vector space along its inner product) to address both unsupervised and semi-supervised domain adaptation problems. This is achieved by learning projections from each domain to a latent space along the Mahalanobis metric of the latent space to simultaneously minimizing a notion of domain variance while maximizing a measure of discriminatory power. In particular, we make use of the Riemannian optimization techniques to match statistical properties (e.g., first and second order statistics) between samples projected into the latent space from different domains. Upon availability of class labels, we further deem samples sharing the same label to form more compact clusters while pulling away samples coming from different classes. We extensively evaluate and contrast our proposal against state-of-the-art methods for the task of visual domain adaptation using both handcrafted and deep-net features. Our experiments show that even with a simple nearest neighbor classifier, the proposed method can outperform several state-of-the-art methods benefitting from more involved classification schemes.",
"title": ""
},
{
"docid": "4d9adaac8dc69f902056d531f7570da7",
"text": "A new CMOS buffer without short-circuit power consumption is proposed. The gatedriving signal of the output pull-up (pull-down) transistor is fed back to the output pull-down (pull-up) transistor to get tri-state output momentarily, eliminating the short-circuit power consumption. The HSPICE simulation results verified the operation of the proposed buffer and showed the power-delay product is about 15% smaller than conventional tapered CMOS buffer.",
"title": ""
},
{
"docid": "3cdc2052eb37bdbb1f7d38ec90a095c4",
"text": "We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring highintensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring on various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably methods that are well-engineered for specific scenarios.",
"title": ""
},
{
"docid": "ee63ca73151e24ee6f0543b0914a3bb6",
"text": "The aim of this study was to investigate whether different aspects of morality predict traditional bullying and cyberbullying behaviour in a similar way. Students between 12 and 19 years participated in an online study. They reported on the frequency of different traditional and cyberbullying behaviours and completed self-report measures on moral emotions and moral values. A scenario approach with open questions was used to assess morally disengaged justifications. Tobit regressions indicated that a lack of moral values and a lack of remorse predicted both traditional and cyberbullying behaviour. Traditional bullying was strongly predictive for cyberbullying. A lack of moral emotions and moral values predicted cyberbullying behaviour even when controlling for traditional bUllying. Morally disengaged justifications were only predictive for traditional, but not for cyberbullying behaviour. The findings show that moral standards and moral affect are important to understand individual differences in engagement in both traditional and cyberforms of bUllying.",
"title": ""
},
{
"docid": "7013e752987cf3dbdeab029d8eb184e6",
"text": "Federated searching was once touted as the library world’s answer to Google, but ten years since federated searching technology’s inception, how does it actually compare? This study focuses on undergraduate student preferences and perceptions when doing research using both Google and a federated search tool. Students were asked about their preferences using each search tool and the perceived relevance of the sources they found using each search tool. Students were also asked to self-assess their online searching skills. The findings show that students believe they possess strong searching skills, are able to find relevant sources using both search tools, but actually prefer the federated search tool to Google for doing research. Thus, despite federated searching’s limitations, students see the need for it, libraries should continue to offer federated search (especially if a discovery search tool is not available), and librarians should focus on teaching students how to use federated search and Google more effectively.",
"title": ""
},
{
"docid": "ea5f0ac771cd3dd860320aba5620cdee",
"text": "Despite the burgeoning number of studies of public sector information systems, very few scholars have focussed on the relationship between e-Government policies and information systems choice and design. Drawing on Fountain’s (2001) technology enactment framework, this paper endeavours to conduct an in-depth investigation of the intricacies characterising the choice and design of new technologies in the context of e-Government reforms. By claiming that technologies are carriers of e-Government reform aims, this study investigates the logics embedded in the design of new technology and extant political interests and values inscribed in e-Government policies. The e-Government enactment framework is proposed as a theoretical and analytical approach to understand and study the complexity of these relationships which shape e-Government policies. 2010 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
7194785b2bc4d2392b94316cef85b9d5
|
GSET somi: a game-specific eye tracking dataset for somi
|
[
{
"docid": "825b567c1a08d769aa334b707176f607",
"text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.",
"title": ""
}
] |
[
{
"docid": "a1b98a7a689e4972808ccae4dc26ac52",
"text": "Person re-identification is generally divided into two part: first how to represent a pedestrian by discriminative visual descriptors and second how to compare them by suitable distance metrics. Conventional methods isolate these two parts, the first part usually unsupervised and the second part supervised. The Bag-of-Words (BoW) model is a widely used image representing descriptor in part one. Its codebook is simply generated by clustering visual features in Euclidian space. In this paper, we propose to use part two metric learning techniques in the codebook generation phase of BoW. In particular, the proposed codebook is clustered under Mahalanobis distance which is learned supervised. Extensive experiments prove that our proposed method is effective. With several low level features extracted on superpixel and fused together, our method outperforms state-of-the-art on person re-identification benchmarks including VIPeR, PRID450S, and Market1501.",
"title": ""
},
{
"docid": "348702d85126ed64ca24bdc62c1146d9",
"text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.",
"title": ""
},
{
"docid": "0b3e7b6b47f51dc75c99f59e3aa79b52",
"text": "This brief presents a frequency-domain analysis of the latch comparator offset due to load capacitor mismatch. Although the analysis is applied to the static latch comparator, the developed method can be extended to the dynamic latch comparator.",
"title": ""
},
{
"docid": "6154efdd165c7323c1ba9ec48e63cfc6",
"text": "A RANSAC based procedure is described for detecting inliers corresponding to multiple models in a given set of data points. The algorithm we present in this paper (called multiRANSAC) on average performs better than traditional approaches based on the sequential application of a standard RANSAC algorithm followed by the removal of the detected set of inliers. We illustrate the effectiveness of our approach on a synthetic example and apply it to the problem of identifying multiple world planes in pairs of images containing dominant planar structures.",
"title": ""
},
{
"docid": "ce95a757725fe5cff0443e4a29214390",
"text": "Logistic regression is a popular technique used in machine learning to construct classification models. Since the construction of such models is based on computing with large datasets, it is an appealing idea to outsource this computation to a cloud service. The privacy-sensitive nature of the input data requires appropriate privacy preserving measures before outsourcing it. Homomorphic encryption enables one to compute on encrypted data directly, without decryption and can be used to mitigate the privacy concerns raised by using a cloud service. In this paper, we propose an algorithm (and its implementation) to train a logistic regression model on a homomorphically encrypted dataset. The core of our algorithm consists of a new iterative method that can be seen as a simplified form of the fixed Hessian method, but with a much lower multiplicative complexity. We test the new method on two interesting real life applications: the first application is in medicine and constructs a model to predict the probability for a patient to have cancer, given genomic data as input; the second application is in finance and the model predicts the probability of a credit card transaction to be fraudulent. The method produces accurate results for both applications, comparable to running standard algorithms on plaintext data. This article introduces a new simple iterative algorithm to train a logistic regression model that is tailored to be applied on a homomorphically encrypted dataset. This algorithm can be used as a privacy-preserving technique to build a binary classification model and can be applied in a wide range of problems that can be modelled with logistic regression. Our implementation results show that our method can handle the large datasets used in logistic regression training.",
"title": ""
},
{
"docid": "431e3826c8191834d08aae4f3e85e10b",
"text": "This paper presents an ultra low-power high-speed dynamic comparator. The proposed dynamic comparator is designed and simulated in a 65-nm CMOS technology. It dissipates 7 μW, 21.1 μW from a 0.9-V supply while operating at 1 GHz, 3 GHz sampling clock respectively. Proposed circuit can work up to 14 GHz. Ultra low power consumption is achieved by utilizing charge-steering concept and proper sizing. Monte Carlo simulations show that the input referred offset contribution of the internal devices is negligible compared to the effect of the input devices which results in 3.8 mV offset and 3 mV kick-back noise.",
"title": ""
},
{
"docid": "c42edb326ec95c257b821cc617e174e6",
"text": "recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions. Iman Avazpour Faculty of ICT, Centre for Computing and Engineering Software and Systems (SUCCESS), Swinburne University of Technology, Hawthorn, Victoria 3122, Australia e-mail: iavazpour@swin.",
"title": ""
},
{
"docid": "6bc5aab717d5a78c99f5d85b40c3d482",
"text": "Recognition in uncontrolled situations is one of the most important bottlenecks for practical face recognition systems. In particular, few researchers have addressed the challenge to recognize noncooperative or even uncooperative subjects who try to cheat the recognition system by deliberately changing their facial appearance through such tricks as variant expressions or disguise (e.g., by partial occlusions). This paper addresses these problems within the framework of similarity matching. A novel perception-inspired nonmetric partial similarity measure is introduced, which is potentially useful in dealing with the concerned problems because it can help capture the prominent partial similarities that are dominant in human perception. Two methods, based on the general golden section rule and the maximum margin criterion, respectively, are proposed to automatically set the similarity threshold. The effectiveness of the proposed method in handling large expressions, partial occlusions, and other distortions is demonstrated on several well-known face databases.",
"title": ""
},
{
"docid": "0cb34c6202328c57dbd1e8e7270d8aa6",
"text": "Optimization of deep learning is no longer an imminent problem, due to various gradient descent methods and the improvements of network structure, including activation functions, the connectivity style, and so on. Then the actual application depends on the generalization ability, which determines whether a network is effective. Regularization is an efficient way to improve the generalization ability of deep CNN, because it makes it possible to train more complex models while maintaining a lower overfitting. In this paper, we propose to optimize the feature boundary of deep CNN through a two-stage training method (pre-training process and implicit regularization training process) to reduce the overfitting problem. In the pre-training stage, we train a network model to extract the image representation for anomaly detection. In the implicit regularization training stage, we re-train the network based on the anomaly detection results to regularize the feature boundary and make it converge in the proper position. Experimental results on five image classification benchmarks show that the two-stage training method achieves a state-of-the-art performance and that it, in conjunction with more complicated anomaly detection algorithm, obtains better results. Finally, we use a variety of strategies to explore and analyze how implicit regularization plays a role in the two-stage training process. Furthermore, we explain how implicit regularization can be interpreted as data augmentation and model ensemble.",
"title": ""
},
{
"docid": "38fab4cc5cffea363eecbc8b2f2c6088",
"text": "Domain adaptation algorithms are useful when the distributions of the training and the test data are different. In this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. We propose maximum independence domain adaptation (MIDA) and semi-supervised MIDA to address this problem. Domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. Then, MIDA learns a subspace which has maximum independence with the domain features, so as to reduce the interdomain discrepancy in distributions. A feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. They can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change.",
"title": ""
},
{
"docid": "535934dc80c666e0d10651f024560d12",
"text": "The following individuals read and discussed the thesis submitted by student Mindy Elizabeth Bennett, and they also evaluated her presentation and response to questions during the final oral examination. They found that the student passed the final oral examination, and that the thesis was satisfactory for a master's degree and ready for any final modifications that they explicitly required. iii ACKNOWLEDGEMENTS During my time of study at Boise State University, I have received an enormous amount of academic support and guidance from a number of different individuals. I would like to take this opportunity to thank everyone who has been instrumental in the completion of this degree. Without the continued support and guidance of these individuals, this accomplishment would not have been possible. I would also like to thank the following individuals for generously giving their time to provide me with the help and support needed to complete this study. Without them, the completion of this study would not have been possible. Breast hypertrophy is a common medical condition whose morbidity has increased over recent decades. Symptoms of breast hypertrophy often include musculoskeletal pain in the neck, back and shoulders, and numerous psychosocial health burdens. To date, reduction mammaplasty (RM) is the only treatment shown to significantly reduce the severity of the symptoms associated with breast hypertrophy. However, due to a lack of scientific evidence in the medical literature justifying the medical necessity of RM, insurance companies often deny requests for coverage of this procedure. Therefore, the purpose of this study is to investigate biomechanical differences in the upper body of women with larger breast sizes in order to provide scientific evidence of the musculoskeletal burdens of breast hypertrophy to the medical community Twenty-two female subjects (average age 25.90, ± 5.47 years) who had never undergone or been approved for breast augmentation surgery, were recruited to participate in this study. Kinematic data of the head, thorax, pelvis and scapula was collected during static trials and during each of four different tasks of daily living. Surface electromyography (sEMG) data from the Midcervical (C-4) Paraspinal, Upper Trapezius, Lower Trapezius, Serratus Anterior, and Erector Spinae muscles were recorded in the same activities. Maximum voluntary contractions (MVC) were used to normalize the sEMG data, and %MVC during each task in the protocol was analyzed. Kinematic data from the tasks of daily living were normalized to average static posture data for each subject. Subjects were …",
"title": ""
},
{
"docid": "be7cc41f9e8d3c9e08c5c5ff1ea79f59",
"text": "A person’s emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: “The face is the portrait of the mind; the eyes, its informers.”. This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and",
"title": ""
},
{
"docid": "7dfe5cc877bfe1796c6bdcd2f5114473",
"text": "The main challenge in Super Resolution (SR) is to discover the mapping between the low-and high-resolution manifolds of image patches, a complex ill-posed problem which has recently been addressed through piecewise linear regression with promising results. In this paper we present a novel regression-based SR algorithm that benefits from an extended knowledge of the structure of both manifolds. We propose a transform that collapses the 16 variations induced from the dihedral group of transforms (i.e. rotations, vertical and horizontal reflections) and antipodality (i.e. diametrically opposed points in the unitary sphere) into a single primitive. The key idea of our transform is to study the different dihedral elements as a group of symmetries within the high-dimensional manifold. We obtain the respective set of mirror-symmetry axes by means of a frequency analysis of the dihedral elements, and we use them to collapse the redundant variability through a modified symmetry distance. The experimental validation of our algorithm shows the effectiveness of our approach, which obtains competitive quality with a dictionary of as little as 32 atoms (reducing other methods' dictionaries by at least a factor of 32) and further pushing the state-of-the-art with a 1024 atoms dictionary.",
"title": ""
},
{
"docid": "f58192a1ef1686e1754f8adb3f4481de",
"text": "The twenty-first century organizations are characterized by an emphasis on knowledge and information. Today’s organizations also require the acquisition, management, and exploitation of knowledge and information in order to improve their own performance. In the current economy, the foundations of organizational competitiveness have turned former tangible and intangible resources into knowledge and the focus of information systems has also changed from information management to knowledge management. Besides, the most important step in the implementation of knowledge management is to examine the significant factors in this regard and to identify the causes of failure. Therefore, the present study evaluated knowledge management failure factors in an intuitionistic fuzzy environment as a case study in Khuzestan Oil and Gas Company. For this purpose, a series of failure factors affecting knowledge management in organizations were identified based on a review of the related literature and similar studies. Then, 16 failure factors in the implementation of knowledge management in the given organization were determined on the basis of interviews with company experts. According to the specified factors as well as the integration of multiple criteria decision-making techniques in an intuitionistic fuzzy environment, knowledge management failure factors in Khuzestan Oil and Gas Company were investigated. The results indicated that lack of management commitment and leadership was the most important factor affecting the failure of knowledge management in the given company.",
"title": ""
},
{
"docid": "5d5d301bf65031791a3f012758888d6e",
"text": "Recent additive manufacturing technologies, such as 3-D printing and printed electronics, require constant speed motion for consistent material deposition. In this paper, a new path planning algorithm is developed for an \\(XY\\) -motion stage with an emphasis on aerosol printing. The continuous aerosol stream provided by the printing nozzle requires constant velocity in relative motion of a substrate to evenly deposit inks. During transitioning between print segments, a shutter prevents the aerosol from reaching the substrate, therefore wasting material. The proposed path planning algorithm can control motion of an \\(XY\\) stage for an arbitrary printing path and desired velocity while minimizing material waste. Linear segments with parabolic blends (LSPBs) trajectory planning is used during printing, and minimum time trajectory (MTT) planning is used during printer transition. Simulation results show that combining LSPB with MTT can minimize the printing time while following the desired path.",
"title": ""
},
{
"docid": "a7cc7076d324f33d5e9b40756c5e1631",
"text": "Social learning analytics introduces tools and methods that help improving the learning process by providing useful information about the actors and their activity in the learning system. This study examines the relation between SNA parameters and student outcomes, between network parameters and global course performance, and it shows how visualizations of social learning analytics can help observing the visible and invisible interactions occurring in online distance education. The findings from our empirical study show that future research should further investigate whether there are conditions under which social network parameters are reliable predictors of academic performance, but also advises against relying exclusively in social network parameters for predictive purposes. The findings also show that data visualization is a useful tool for social learning analytics, and how it may provide additional information about actors and their behaviors for decision making in online distance",
"title": ""
},
{
"docid": "3e18a760083cd3ed169ed8dae36156b9",
"text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making",
"title": ""
},
{
"docid": "9f037fd53e6547b689f88fc1c1bed10a",
"text": "We study feature selection as a means to optimize the baseline clickbait detector employed at the Clickbait Challenge 2017 [6]. The challenge’s task is to score the “clickbaitiness” of a given Twitter tweet on a scale from 0 (no clickbait) to 1 (strong clickbait). Unlike most other approaches submitted to the challenge, the baseline approach is based on manual feature engineering and does not compete out of the box with many of the deep learning-based approaches. We show that scaling up feature selection efforts to heuristically identify better-performing feature subsets catapults the performance of the baseline classifier to second rank overall, beating 12 other competing approaches and improving over the baseline performance by 20%. This demonstrates that traditional classification approaches can still keep up with deep learning on this task.",
"title": ""
},
{
"docid": "c479983e954695014417976275030746",
"text": "Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes, that classical one level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations that allow themselves to an interpretation of clustering according to different, unknown attributes of a given dataset. We also present a semi-supervised version of the algorithm, named Deep WSF, that allows the use of (partial) prior information for each of the known attributes of a dataset, that allows the model to be used on datasets with mixed attribute knowledge. Finally, we show that our models are able to learn low-dimensional representations that are better suited for clustering, but also classification, outperforming Semi-Non-negative Matrix Factorization, but also other state-of-the-art methodologies variants.",
"title": ""
},
{
"docid": "4331057bb0a3f3add576513fa71791a8",
"text": "The category theoretic structures of monads and comonads can be used as an abstraction mechanism for simplifying both language semantics and programs. Monads have been used to structure impure computations, whilst comonads have been used to structure context-dependent computations. Interestingly, the class of computations structured by monads and the class of computations structured by comonads are not mutually exclusive. This paper formalises and explores the conditions under which a monad and a comonad can both structure the same notion of computation: when a comonad is left adjoint to a monad. Furthermore, we examine situations where a particular monad/comonad model of computation is deficient in capturing the essence of a computational pattern and provide a technique for calculating an alternative monad or comonad structure which fully captures the essence of the computation. Included is some discussion on how to choose between a monad or comonad structure in the case where either can be used to capture a particular notion of computation.",
"title": ""
}
] |
scidocsrr
|